Monitoring, Evaluation and Impact Assessment


Monitoring is the systematic and continuous assessment of the progress of a piece of work over time. It checks whether things are going ‘according to plan’ and enables timely adjustments to be made in a methodical way.

Evaluation is the assessment of the relevance, performance, efficiency and impact of a piece of work with respect to its stated objectives.

Monitoring and Evaluation (M&E) has become an intrinsic component of the development project cycle and has evolved into a project management tool. Evaluation may be conducted at the programme planning stage (Ex-Ante Evaluation), halfway through a project (Mid-Term Evaluation), soon after the project ends to assess its effects (End-of-Project Evaluation), or some time after the project has ended to determine whether the intended impacts have been achieved (Ex-Post Evaluation).

Interventions are evaluated, and their impact assessed, not only at the project level but also at the organisational level, since projects are situated within organisations.



[Diagram: concentric circles, with projects at the centre, the organisation around them, and society as the outermost circle]

An organisation’s approach, its strategies, resources (human as well as financial), structures, leadership and management affect the outcomes and impacts of its projects. In fact, projects are derived from the organisation’s vision, mission and strategies, so organisations may differ in the means they use to bring about change in society. In the diagram, the inner circles depict projects, and the outer circles represent the organisations and society. Monitoring focuses on the ‘outputs’ achieved by the projects; evaluation focuses on what happened in the projects and within the organisation; and the outer circle of impact looks at the larger changes in society that the project eventually brought about. Evaluation also, at times, looks at changes within organisations.


The key purposes of M&E include:

›› To improve management and process planning.
   
›› To understand perspectives of stakeholders.
   
›› To assess project results.
   
›› To ensure accountability.
   
›› To promote learning.
   

The failure of programmes and projects established through top-down approaches to achieve developmental goals made practitioners realise the importance of participation: the ‘people’ for whom projects are made should take part in planning, monitoring and evaluation, which leads to the sustainability of a project. Greater participation also leads to ownership of the project by the people, which promotes learning and action through a process of gradual empowerment. This has led to the growing importance of Participatory Monitoring and Evaluation (PME) in assessing the effectiveness of development projects.

PME has its roots in participatory research, action research and Participatory Learning and Action, all of which emphasise the involvement of ‘people’ in critical investigation, analysis and action. In PME, donors, project-implementing agencies and community members are equal partners at every stage, including building indicators, choosing methods of data collection, developing formats, collecting data, analysing data and taking joint action. The focus is on learning and action.

The following table illustrates the difference between conventional M&E and PME.


Criteria | Conventional M&E | Participatory M&E
Who initiates? | Donor | Donor and project stakeholders
Purpose? | Donor accountability | Capacity building, increasing ownership of results, and multi-stakeholder accountability
Who evaluates? | External evaluator | Project stakeholders, assisted by a PME facilitator
Terms of Reference | Set by the donor, with limited input from projects | Set by project stakeholders
Methods | Surveys, questionnaires, semi-structured interviews, focus group discussions, etc. | A range of methods such as Participatory Learning and Action, Appreciative Inquiry, Most Significant Change stories, Outcome Mapping, testimonials, etc.
Outcome | Final report circulated within the donor institution, with copies to project management | Better understanding of local realities; stakeholders involved in analysis and decision-making regarding project information; stakeholders able to adjust project strategies and activities to better meet results


It is also important to note that conventional M&E has not withered away with the emergence of PME. In most projects and programmes, conventional M&E and PME methodologies are used together. In recent times, as aid effectiveness has become a pressing issue, there has again been a strong push from donor agencies for externally led evaluations and robust monitoring systems. PME invites criticism for being subjective in its approach. At the same time, however, people-centred approaches are gaining prominence, in which people are treated not as respondents but as important actors in assessing the changes and impacts that donor investments are intended to bring about.


Practice In Participation invites resources that build practitioners’ understanding of PME processes, outcomes, principles and challenges. These could include:


›› Case studies of how to build sustained involvement of stakeholders over a period of time
   
›› Field experiments in negotiation and trust building with grassroots communities
   
›› Field lessons in monitoring and evaluation practices
   
›› Challenges in implementing PME principles (such as the principle of participation, the principle of learning, the principle of negotiation, and the principle of flexibility) in grassroots communities
   
›› Organisational scaling up of PME
   
›› Conference and workshop reports on PME practices
   
›› Theoretical resources and readings on PME practices
   
 

This concept note draws primarily on the following references:


Bakewell, O., J. Adams and B. Pratt, 2003. Sharpening the Development Process: A Practical Guide to Monitoring and Evaluation. INTRAC, UK.


Estrella, M. and J. Gaventa, 1998. Who Counts Reality? Participatory Monitoring and Evaluation: A Literature Review. IDS Working Paper, IDS, UK.


Parks, W., D. Gray-Felder, J. Hunt and A. Byrne, 2005. Who Measures Change? An Introduction to Participatory Monitoring and Evaluation. Communication for Social Change Consortium.