Join the Conversation - What evidence base counts when evaluating good practice in program delivery?

This webinar reflected on the experiences of the FAST program in the NT to explore what counts as good practice in Indigenous community programs.

Please post your comments, and additional questions for our presenter, in the comments section below.

A recording of the webinar, with presentation slides and a transcript, will soon be made available on the CFCA website. Subscribe to CFCA news to receive a notification when these resources are available.

For more information on this topic, please see the Evaluation tools and resources section of the website, which includes:

  • General evaluation tools and resources
  • Indigenous evaluation (Australia and internationally)
  • Ethics guides and protocols for working with Indigenous people

Comments (10)

Q: Should evaluation and interventions be fun? Should this be stated in plans?
Q: Should measurable outcomes (from the funder) be dynamic? How can that be achieved?
Q: Does publication bias create a distorted evidence base from funders that will inevitably need to be changed on the ground?
In terms of the evaluation approach, would you describe the FAST project as an "action learning" project within the key parameters of the FAST evaluation framework?
What are some of the elements of the Promising Practice Profile?
Are the local team meetings convened as sit-down discussions or do these groups actually do things and create events together?
Is the evaluation of the program a comparative approach between how particular parameters were perceived at baseline and then at different points: (i) across the 8 weeks of implementation, and (ii) subsequently beyond the 8-week implementation period?
Are we talking about "facilitating" the use of "reflection" in various settings, from individual/group to community-wide, when "evaluating" programs and trying to understand what is going on, versus "evaluation", which I feel can be more technical? Any thoughts?
Thanks for your questions, everyone. Our presenters will endeavour to provide responses in the next few working days. Thanks, Adam
Dianne, it depends on what you mean by reflection and evaluation, but I'd suggest that what we are talking about are approaches that employ reflective and reflexive methods within an overarching evaluation framework. In my work with FAST we have used mixed-methods approaches to evaluate how the program has worked at a number of levels (for example, for parent participants, children, schools and team members). The changes we are looking for are both qualitative and quantitative.
Bill, FAST internationally uses a pre- and post-program instrument which measures change in participants before and after the program. We've also done a more qualitative evaluation of the longer-term change in participants beyond the life of the 8-week program. See http://www.crc-rep.com.au/resource/CR002_FASTatGillenPrimarySchool.pdf for more on this. One of the challenges for any program evaluation like this is that when you measure change across the program you only capture those who have stuck with the program for the duration, and of course they are more likely to gain a benefit from the program than those who drop out or participate intermittently. The parameters span five or six domains; if you'd like more information on what they are, I will gladly provide it.
Bill, the approach used to evaluate FAST has several elements. I'd describe it as participatory, reflective and reflexive, using mixed methods to inform practice and program design, and also to inform funders who want to know if and how the program has worked.