Person:
Jacob, Steve

Last name
Jacob
First name
Steve
Affiliation
Université Laval. Département de science politique
Canadiana identifier
ncf10562629

Search results

Showing items 1 - 3 of 3
  • Publication (restricted access)
    La fonction d’évaluation dans l’administration publique québécoise : analyse de la cohérence du système d’actions
    (Institute of Public Administration of Canada, 2014-03-01) Jacob, Steve; Smits, Pernelle
    This article analyzes the evaluation function in Quebec's public administration during the 2000s as an organized system of actions (values, environment, resources and modalities, practice, effects), with its coherences and incoherences. Three findings emerge: support for evaluative practice from the central ministries is almost entirely absent, transparency toward citizens in the evaluation process is limited, and the production and use of evaluations of strategic scope appears infrequent. The article highlights certain coherences and incoherences of a management function and proposes a direct and systematic way of identifying the points to improve.
  • Publication (open access)
    Using systematic review methods within a Ph.D. dissertation in political science : challenges and lessons learned from practice
    (Taylor & Francis, 2012-10-09) Daigneault, Pierre-Marc; Jacob, Steve; Ouimet, Mathieu
    Systematic review and synthesis methods have gained wide acceptance within the social sciences and, as a result, many postgraduate students now consider using them for their thesis or dissertation research. However, students are rarely aware of all the concrete implications that their decision entails. This reflective narrative reports the experience of a political science student who began to conduct a systematic review as part of his PhD dissertation but who did not complete it. The aim of this article is to identify challenges and lessons learned from this experience and to formulate recommendations for postgraduate students who wish to make an informed choice with respect to the use of these methods.
  • Publication (open access)
    Measuring stakeholder participation in evaluation : an empirical validation of the Participatory Evaluation Measurement Instrument (PEMI)
    (Sage Publications, 2012-08-31) Daigneault, Pierre-Marc; Jacob, Steve; Tremblay, Joël
    Background. Stakeholder participation is an important trend in the field of program evaluation. Although a few measurement instruments have been proposed, they either have not been empirically validated or do not cover the full content of the concept. Objectives. This study consists of a first empirical validation of a measurement instrument that fully covers the content of participation, namely the Participatory Evaluation Measurement Instrument (PEMI). It specifically examines 1) the intercoder reliability of scores derived by two research assistants on published evaluation cases; 2) the convergence between the scores of coders and those of key respondents (i.e., authors); and 3) the convergence between the authors’ scores on the PEMI and the Evaluation Involvement Scale (EIS). Sample. A purposive sample of 40 cases drawn from the evaluation literature was used to assess reliability. One author per case in this sample was then invited to participate in a survey; 25 fully usable questionnaires were received. Measures. Stakeholder participation was measured on nominal and ordinal scales. Cohen’s kappa, the intraclass correlation coefficient and Spearman’s rho were used to assess reliability and convergence. Results. Reliability results ranged from fair to excellent. Convergence between coders’ and authors’ scores ranged from poor to good. Scores derived from the PEMI and the EIS were moderately associated. Conclusions. Evidence from this study is strong in the case of intercoder reliability and ranges from weak to strong in the case of convergent validation. Globally, this suggests that the PEMI can produce scores that are both reliable and valid.
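
For readers unfamiliar with the agreement statistics named in the last abstract, here is a minimal Python sketch, not the authors' analysis code, showing how Cohen's kappa and Spearman's rho could be computed on invented ratings from two coders. The intraclass correlation coefficient, which requires a long-format table of ratings (available, for example, in the pingouin package), is omitted for brevity.

    # A minimal sketch, not the authors' actual analysis code: computing two of
    # the agreement statistics named in the abstract on invented coder ratings.
    from scipy.stats import spearmanr
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical codes assigned by two coders to the same ten cases
    coder_a = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]
    coder_b = [1, 2, 3, 3, 1, 2, 3, 2, 1, 2]

    # Cohen's kappa: chance-corrected agreement for nominal-scale codes
    kappa = cohen_kappa_score(coder_a, coder_b)

    # Spearman's rho: rank correlation for ordinal-scale scores
    rho, p_value = spearmanr(coder_a, coder_b)

    print(f"Cohen's kappa: {kappa:.2f}")
    print(f"Spearman's rho: {rho:.2f} (p = {p_value:.3f})")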