Closing Report

A report on auditing in Project Catalyst between February and June 2022


Audit Circle Fund 7 - Closing Report

Proposal info

Catalyst Audit Circle: https://cardano.ideascale.com/c/idea/381354
Budget: $12,700
Part of the Improve And Grow Auditability challenge, Fund 7

Start date: March 2022
End date: June 2022

The voting

A total of 276 unique wallets voted on the proposal: ₳67,964,646 voted Yes and ₳1,382,596 voted No, making it the 7th of 8 successful proposals in its challenge. The average (mean) wallet size voting on it was ₳251,258, which at the time (February 2022) was worth approximately $289,000.
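For readers who want to check the figures, the mean follows directly from the totals quoted above. A minimal sketch; the ~$1.15 per ₳ rate is an assumption inferred from the ₳251,258 ≈ $289,000 pairing, not a source value:

```python
# Sanity check on the voting figures quoted above.
yes_ada = 67_964_646   # ada voting Yes
no_ada = 1_382_596     # ada voting No
wallets = 276

total_ada = yes_ada + no_ada        # 69,347,242 ada total voting stake
mean_wallet = total_ada / wallets   # ~251,258 ada per wallet

# ~$1.15 per ada is inferred from the figures in the report;
# it approximates the ADA price in February 2022 (an assumption).
usd_rate = 1.15
print(f"Mean wallet: {mean_wallet:,.0f} ADA = ${mean_wallet * usd_rate:,.0f}")
```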

Project team

  • George Lovegrove: PACE (creator of the original proposal idea)
  • Stephen Whitenstall: QADAO (proposal owner)
  • Jo Allum
  • Phil Khoo: AIM
  • Vanessa Cardui: QADAO, Funded Proposers Subcircle
  • Ron Hill: SPOCRA
  • Thorsten Pottebaum
  • Eric Helms: SCATDAO
  • Matthias Sieber: Loxe, Inc
  • Andre Diamond and Miroslav Rajh: Treasury Guild
  • Megan Widney: documentation / minutes

Challenge KPIs and how we addressed them

The main success metric for the F7 Auditability challenge was "Number of proposals audited", along with a recognition that proposals in different challenges should be audited in ways appropriate to their challenge.

But “auditing proposals” as a metric was perhaps premature, since in F7 the groundwork still needed to be done to develop the thinking, the methodologies, and the tooling for audit. Several proposals in the challenge addressed methodologies and tooling; Audit Circle set out to address the underlying thinking, and to look at what needs to be in place to enable effective audit. We took a “problem-sensing” approach, using our diverse connections in the Catalyst ecosystem to find out how the current process was working, and to gauge how the community (both proposers and others) thinks the process should work. Both in our meetings and in our survey of proposers, we explored the idea that proposals in different challenges might need to be audited in different ways.

The Challenge described its core aim as ensuring transparency in the use of Cardano's treasury resources. Our meetings discussed how Catalyst might do this, and how far existing monitoring practices achieve it; we also discussed what “transparency” means in practice. A key issue that emerged in our problem-sensing was the balance between transparency and confidentiality/privacy, particularly around proposers’ IP, and around any personal data that they might hold, for example from project participants.

Proposal aims and how we addressed them

The concept of Audit Circle was similar to that of Catalyst Circle, but for audit issues. Its aims were to problem-sense issues around auditability (including highlighting any confusion in the process, and considering how to identify bad actors); prioritise the issues; and collaborate with the community to suggest potential solutions. The plan was that any solutions proposed could then go through the Catalyst funding process.

We addressed these aims by:

  1. Listening: we regularly attended Coordinator meetings, and also held regular Office Hours to reach proposers in the Eastern hemisphere. We also publicised audit issues through our regular fortnightly Town Hall slides, an After Town Hall session, a Gimbalabs Playground, regular postings in the Catalyst Weekly newsletter, our public discussion channel on the CGO Discord server, and by videoing our meetings and sharing them via YouTube and through summaries on GitBook. Our problem-sensing was aided, above all, by our large team. It was important to our success that the team consisted of experienced Catalyst community members, each with a long-standing interest in audit and an understanding of the issues. Together, the team had a diverse range of perspectives and links to many different parts of the ecosystem, so we were able to hear a lot of different input, both via attending public meetings and events, and via private conversations with proposers and others.

  2. Asking: we planned to run two surveys on audit issues - one internal (to IOG staff) and one external (to funded proposers) - see below for details. Both surveys focused on the confusion or barriers that proposers face in the current system. This also addressed the key proposal aim of identifying bad actors: if the community understands more about the barriers proposers face, it becomes easier to tell the difference between honest proposers who are facing obstacles (where the problem might be the process, not the proposer) and genuine bad actors.

  3. Discussing solutions: again, our diversity of viewpoints was a strength. Our team were involved in related initiatives throughout Cardano, enabling us to collaborate and discuss on several fronts. Two examples were running a Gimbalabs Playground on audit issues (Matthias), and discussing audit-related changes to the proposal form with the Funded Proposers’ and PA Subcircles (Vanessa; see discussion: Draft Proposal submission form guidance notes v1). Developing finished audit solutions was naturally beyond what this fairly small proposal could achieve, although we did maintain awareness of solutions being developed elsewhere in Catalyst. One solution that sadly went unrealised was our continuation proposal, Audit Circle F8, which planned to build on our work and create a broader space for the community to focus on audit; unfortunately, it just missed out on funding.

Key achievements

Our main output was designing, running, and analysing our external survey of proposers. We worked hard to engage the community with it, and received 47 responses. Although this sample is too small to support statistical generalisation, it is fairly high for an un-incentivised Catalyst survey, and more than enough to yield some interesting qualitative insights and start some conversations. We shared our findings on 2nd March 2023 at the new Accountability working group led by IOG.

See below for detailed insights from the survey.

Another achievement was bringing together a diverse group of skilled people who are interested in Catalyst auditability and would like to see it improved. Although the group officially disbanded at the end of the project, we enjoyed working together and are interested in continuing to work on audit in some capacity. Cultivating such a team for Catalyst can be considered a key achievement.

Documenting and sharing our work was another achievement. We learnt, however, that videos of meetings are not the best way to do this. Our YouTube views were low - partly, perhaps, because audit was a minority interest in Catalyst at the start of F7, but more likely because watching a 1-hour video is a barrier for most people. We mitigated this by sharing our work in other ways: regular slides at the Wednesday Town Hall; regular Office Hours; presence at Coordinator meetings; content in the Catalyst Weekly newsletter; running an After Town Hall and a Gimbalabs Playground; maintaining a public channel in the Community Governance Oversight Discord server; text summaries of meetings on GitBook; and soliciting responses to our external survey via Telegram, Twitter, and DMs on Ideascale. We could perhaps have done more to engage people with the meetings themselves; but since we had not budgeted for this, we didn't have the resources to do so, and it's an achievement that we worked effectively with the limited resources we had.

This is an important lesson, not only for us, but for proposers at large - the vital task of sharing your work with the community takes time and effort, and needs to be budgeted for.

Key learnings

From our meetings and our problem-sensing

Many of our problem-sensing learnings were about monthly reporting. Many proposers experience reporting as friction - an unwelcome chore, or even something to be dreaded - which in their view serves only (ill-defined) "audit" needs and does not help them at all. It was suggested that if reporting data were more accessible, it could support proposers' needs too: it could help with marketing, internal progress reporting, and soliciting helpful advice; it could call out blockers, ask for solutions from others, look for collaborations and partnerships, and sense problems, as well as keeping proposals accountable. To achieve this, Catalyst would have to ensure that reporting data is not locked up in reports or spreadsheets, but collated and shared. If it could be easily accessed and interacted with, it could become a way for proposers to address issues.
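As a purely illustrative sketch of what "collated and shared" reporting data might look like, a machine-readable record per report would let anyone query blockers or progress across projects instead of reading individual spreadsheets. Every field name here is our assumption, not an existing Catalyst schema:

```python
from dataclasses import dataclass, field

# Hypothetical machine-readable monthly report record. The field names
# are illustrative assumptions, not an existing Catalyst schema.
@dataclass
class MonthlyReport:
    proposal_id: str
    month: str                                   # e.g. "2022-05"
    progress_summary: str
    blockers: list[str] = field(default_factory=list)
    collaboration_asks: list[str] = field(default_factory=list)

def open_blockers(reports: list[MonthlyReport]) -> dict[str, list[str]]:
    """Collate blockers across projects so the community can offer help."""
    return {r.proposal_id: r.blockers for r in reports if r.blockers}

# Usage: query collated reports rather than individual documents.
reports = [
    MonthlyReport("F7-audit-circle", "2022-05", "Survey drafted",
                  blockers=["need survey distribution channels"]),
    MonthlyReport("F7-example", "2022-05", "On track"),
]
print(open_blockers(reports))
# -> {'F7-audit-circle': ['need survey distribution channels']}
```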

Two other suggestions were:

  • a regular "ReportFest" to surface key insights from reporting;

  • additional bonuses to incentivise excellent reporting.

We also learnt that many proposers feel confused and uneasy about the frequent unexplained changes to the monthly reporting form, and have no clear sense of why certain things are asked.

An example was the question about the “size of your community”, and the statement on the monthly reporting form that IOG is “interested in” projects with big communities. This was added to the form with no explanation. Since nothing in the proposal process had suggested that projects with smaller communities would be less welcome, it was an unpleasant surprise for proposers who were not planning to build a huge community to read, in effect, that IOG is not very interested in their work.

Some proposers worried that they were being judged negatively for not having a big community around their work - even where the reason is that a big community would be inappropriate to their project, or that they didn't include a big marketing budget to build one.

Several proposers told us that the monthly reporting form should be very simple, and should stay the same each month so that projects can prepare. Also, many proposers don’t have a clear sense of what information is being asked for; they wanted guidance, but guidance appropriate to their type of project. This theme of “one size doesn’t fit all” recurred often in our problem-sensing: different types of proposal need to report in different ways. We noted that perhaps each challenge team could devise guidance on what a proposer should include.

Several proposers raised questions about the overall value and purpose of monthly reporting in its current form. They saw it as a perfunctory, “tick-box” affair, and felt the information collected doesn't lead anywhere, doesn’t increase transparency, and doesn’t actually help track a project’s progress. They felt the purpose of reporting should be more clearly defined, and the reporting form designed around that purpose.

There were also more fundamental questions raised about the purpose of Catalyst audit per se. Is it primarily to mitigate risk? To keep tabs on proposers’ work? Or to collate learning from projects? Perhaps a clearer purpose could help us create more effective audit processes.

Lastly, there were several suggestions to develop some kind of Audit token. The details of these suggestions varied, but it could be an interesting area for Catalyst to explore.

Our Internal Survey (to IOG)

We planned to run an internal survey in the form of a recorded discussion with key IOG staff responsible for proposal reporting; but unfortunately we had both under-budgeted and run out of time, so we were unable to make this happen. We did, however, draw on our problem-sensing to devise a list of questions; IOG could perhaps address them in a different forum at some future date. See: Audit Circle suggested internal survey questions.

Our External Survey (to proposers)

For the survey, we started from the idea that audit begins with data - unless proposers monitor their projects and collect data about them, no audit is possible. So we asked proposers how they collect and store project data, and looked for any issues with their knowledge or confidence about what to collect that might be holding them back from effective monitoring. We also asked what they understand about current monitoring requirements; what they think of them (especially whether the current process fits the work they are doing and enables them to capture what they want to show); and what they think the requirements ought to be.

A copy of the survey itself can be seen here; see here or here for our detailed analysis of the survey.

But in brief, our key recommendations from the survey are:

  • Develop a shared core vocabulary in Catalyst on audit issues, agreeing what we mean by terms such as “audit”, “monitoring”, “outputs”, “outcomes”, “impact”, “process”, etc., so that we can talk about audit with a shared understanding.

  • Look further into the question of whether proposers do feel the “sense of burden and dread about reporting” that some have reported, and if so, why.

  • Create a clearer, easy-to-find guide to what a proposal’s monitoring obligations are, to be given to all proposers on being funded.

  • Raise awareness in Catalyst that it is normal, and even desirable, for a proposal team to adjust its metrics and its monitoring during the course of a project, in response to what its work reveals.

  • Hold some peer-led community workshops on audit. Key topics:

    • How to create an effective monitoring metric which actually measures what the proposer is trying to show. Include different approaches for different types of proposals; and qualitative metrics as well as quantitative ones.

    • How to monitor whether you are reaching your aims.

    • Practical approaches to evidencing impact and outcomes (perhaps drawing on knowledge from the arts and the third sector, where this is common practice), and how Catalyst proposals might do this if they choose.

  • Once we all know more about how to monitor a wide range of things, have a broad community conversation on what proposers should be monitoring - their outcomes, their impact, their outputs, their processes and praxis, or all of these? Bear in mind that this might vary for different types of proposal.

  • Explore how proposers are storing project data, and the relative merits of storage in one place or several.

  • Develop better ideas for how proposers can share their data, and develop guidelines on what data they should be expected to share (particularly considering privacy, data protection, and IP).

  • Discuss what “reflective practice” might look like for different types of proposals, and what routes there should be for recording and sharing the insights drawn from it.

  • Be aware, in all of this, that one size doesn’t fit all, and that different types of proposals need to do things differently.

Retrospective and next steps

We have continued to discuss audit issues informally, in Discord and elsewhere. We had hoped to carry on meeting informally, and perhaps even holding some open workshops or discussions; but unfortunately, this wasn’t possible because we’ve all needed to prioritise funded work. This underlines the importance of resourcing this kind of discursive and ideas-based work - it’s unsustainable for it to be a volunteer role.

Our external survey revealed some possible directions for future work on audit, and particularly for further discussion and research with the community.

Conclusions

At present in Catalyst, although proposers do monitor and (to a varying degree) evaluate their work, this is not the same thing as “audit”. Our view is that very little actual audit currently happens in Catalyst at all. What audit there is, is conducted by IOG in the form of assessing monthly reports (and we note that this has become more overtly like “audit” with the addition of the Fund 9 Milestones pilot). But because the criteria for what constitutes an acceptable report have never been shared, and since any questioning happens privately between proposer and IOG, this is not audit in the commonly-accepted sense.

It would be positive to see the audit function begin to sit more with the community than with IOG; but we have questions about what sort of expertise, if any, is needed to audit proposals (whether technical knowledge, or knowledge and experience in areas such as community engagement or education), and, given that in the Milestones pilot the audit role seems to be falling to Challenge Teams, whether they are best placed to take it. We are conscious of the potential conflict for a Challenge Team if they need to be both a supportive friend and an auditor - perhaps these need to be separate roles.

Audit Circle was not funded to directly undertake audit; but we are aware of a range of approaches that have been suggested or tried in the community: reviewing projects, comparing proposal budgets to actual expenditure, reviewing outcomes, tracking monthly update statements for progress, and qualitative approaches including methodologies such as focus groups, sentiment analysis, and Theory of Change. But as far as we can tell, none of these is happening in any sustained way in the community; often, the blocker is lacking the authority to access teams’ information in order to audit or assess it. Overall, we would like to see several audit pilots experimenting with different approaches in practice (one simple approach is sketched below), always with an awareness that different kinds of projects will need different audit methods.
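As a sketch of the simplest approach listed above - comparing proposal budgets to actual expenditure - the following shows the basic technique. The line items, amounts, and the 10% tolerance threshold are all hypothetical, chosen purely to illustrate how such a pilot might flag variances for review:

```python
# Hypothetical budget-vs-actual comparison for one funded proposal.
# Line items, amounts, and the 10% tolerance are illustrative assumptions.
budget = {"development": 6000, "community": 3000, "documentation": 1500}
actual = {"development": 7100, "community": 2400, "documentation": 1500}

TOLERANCE = 0.10  # flag variances beyond +/-10% of the budgeted amount

for item, planned in budget.items():
    spent = actual.get(item, 0)
    variance = (spent - planned) / planned
    flag = "REVIEW" if abs(variance) > TOLERANCE else "ok"
    print(f"{item:<14} budget {planned:>6} actual {spent:>6} "
          f"variance {variance:+.0%}  {flag}")
```

A flagged variance is a prompt for a conversation with the team, not a verdict; as noted throughout this report, different kinds of projects will need different thresholds and different audit methods.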
