Learning and Adaptation
Achieving systems change and intersectional gender justice is complex; uncertainty and setbacks are common, and progress rarely follows a straight line. For these reasons, Learning, Measurement, and Evaluation (LME) are essential to our approach: we support program partners to articulate key assumptions and hypotheses, test whether these hold over time, and track progress against agreed goals. LME helps partners achieve ambitious outcomes, continuously improve their approaches, and contribute to global knowledge of what works in delivering systems change and intersectional gender justice. We seek to foster a learning orientation across all our grants, supporting our program partners to champion and incorporate learning and adaptation within their organizations, document lessons, and share data, research, and findings to promote broader learning and accountability. We believe that measurement should be at the service of learning and doing; yet without meaningful measurement, learning and doing are impeded.
To help us find the measurement approaches and designs best suited to each partner and the challenge being addressed, we strive to be:
Driven by program partners’ interests, curiosity, and desire to improve practice
Inclusive and supportive of approaches that are driven by those typically excluded from systems and learning
As simple as possible in our approach, seeking alignment with other funders whenever possible
Flexible, curious, adaptive, method agnostic
Rigorous, truthful, open to learning from failure
Transparent and open: publishing designs, tools, data, and results
Informed and connected with global and regional expertise
Read more: Section 3.4, Learning, Measurement and Evaluation, of our Handbook [June 2021]
Co-Impact Guidebook: Learning, Measurement, and Evaluation [June 2021]
Impact & Outcomes Dashboards:
Harambee Youth Employment Accelerator - Systems Change Reporting & Impact Outcomes [September 2021]
One Acre Fund & Landesa - Impact Dashboard [June 2021]
Learning, Measurement, and Evaluation (LME) Plans:
Harambee - LME Plan [June 2021]
One Acre Fund & Landesa - LME Plan [June 2021]
Video: Learning, Monitoring and Evaluation with Varja Lipovsek (Co-Impact) & Lisha McCormick (Last Mile Health) [July 2021]
National Council of Applied Economic Research (NCAER) | Working Paper | Measuring Women’s Empowerment in the Global South [March 2022]
This review traces the intellectual and historical context in which women’s status and empowerment in lower- and middle-income countries have been measured; the conceptual and operationalization challenges in shaping research questions; the use of empirical measures and their connection to levels of social analysis; and emerging directions for future research.
Feedback Labs | Tools Repository
The organizations and tool providers listed on this page are core contributors to the Feedback Labs community and have demonstrated expertise in helping organizations not only collect feedback but also engage with constituents at many points along the feedback loop.
Africa Centre for Evidence & EPPI Centre | Toolkit | Engaging Stakeholders with Evidence and Uncertainty [July 2021]
This toolkit is for policy-makers who want to engage with evidence and with stakeholders in their decision-making, and for impact evaluators or evidence synthesis teams who want to involve stakeholders in the production and use of their research.
Hewlett Foundation | Article | The moral case for evidence in policymaking by Ruth Levine [September 2017]
Oxfam | Discussion Paper | Applying Feminist Principles to Program Monitoring, Evaluation, Accountability and Learning [July 2017]
This paper aims to share reflections on how to apply feminist principles to monitoring, evaluation, accountability and learning (MEAL) practice. It includes case studies of Oxfam’s experience of applying these principles to its programs.
Innovations for Poverty Action (IPA) | Toolkit | Goldilocks Toolkit: Finding the Right Fit in Monitoring & Evaluation [February 2016]
This toolkit puts forth four key principles to help organizations of all sizes build strong systems of data collection for both activity monitoring and impact evaluation: credible, actionable, responsible, and transportable (CART).