Uniswap-Arbitrum Grant Program (UAGP) - Midway Learnings & Survey

The Uniswap-Arbitrum Grant Program (UAGP) reports bi-monthly (our last program/grantee updates for April can be found here). We wanted to use this mid-point in the program to reflect on its progress, the learnings along the way, and participant satisfaction, drawing directly on grantee feedback.

As such, this off-month report starts with a breakdown of our mid-way survey, which gave grantees a forum to tell us how we’re doing, before exploring our key learnings from the UAGP to date.

Additionally, as Gitcoin revamps the Gitcoin Grants Stack, our grants platform of choice, we took the opportunity of this extension to streamline the application questions in response to the applicant feedback we’ve received to date. Importantly, this means that our application link has changed since our last update. Submissions to the UAGP remain open, and we encourage prospective builders to reach out and apply here.


UAGP Mid-Way Survey

With UAGP operations running at full speed, we wanted to take a moment to check in with our program grantees to better understand their experience and identify any areas for improvement. To that end, we created a short survey - a mix of qualitative and quantitative questions - which gave UAGP grantees an opportunity to describe their experience in the program thus far. Nine grantees responded to the survey, and their feedback is detailed below.

The responses highlighted some of our program’s strengths, with respondents giving an average satisfaction rating of 9.2/10.

Respondents also noted communication and responsiveness as a strength of the UAGP committee, rating responsiveness at 9.7 out of 10 on average.

Arguably the piece of feedback we’re proudest to share is the 9.8/10 average likelihood that participants would recommend the UAGP to peers.

In general, we’re pleased with the results of our survey, and, more importantly, we generated tangible learnings from the direct insights of program participants. We are mindful that these are the opinions of accepted grantees, who are naturally inclined toward more positive experiences with the UAGP - a bias consistent with the fact that no individual response rated the likelihood to recommend the UAGP to peers lower than a 9. Even so, we believe it’s worthwhile to run this exercise to test ourselves for any apparent gaps.


UAGP Learnings

Over the past five months, operating the UAGP has taught us several valuable lessons about managing a cross-ecosystem grant program, and we want to take a moment to highlight the most relevant learnings from our recent operations. A summary first, with each learning addressed in more detail below:

Key Learnings:

  1. Reporting Alignment: We adjusted our reporting processes from monthly to bimonthly to better align with the long-term focus of UAGP grants, reducing burnout and creating a more streamlined reporting flow.
  2. Cross-Ecosystem Collaboration: We initiated a Grant Program alignment group to streamline cross-ecosystem collaboration and communication, addressing eligibility and overlap issues between Uniswap and Arbitrum grant programs.
  3. Prospective Applicant Perspectives: Discussions with rejected applicants highlighted the need for specific evaluation feedback, a clearer assessment rubric, and more streamlined application forms; all concerns we feel confident we have addressed.

One major insight was the opportunity to adapt our reporting processes to better fit the nature of our grants (and grantees!). Given the long-term focus of UAGP grants, and the associated longer feedback loops between project initiation and milestone achievement, we moved reporting and tracking tasks from a monthly to a bimonthly cadence. In practice, this change allows grantees to focus on their work and report only the most salient details for a period, reducing the risk of burnout for both grantees and the community members involved in monthly reporting. It is the latest refinement in what we believe is now a very streamlined reporting flow for grantees and the DAOs.

Another insight emerged as awareness of the UAGP began to pick up: we faced increasing overlap with applications relevant to other parts of the Uniswap or Arbitrum ecosystems. This resulted in a manual process of maintaining multiple bilateral Telegram chats with the other relevant grant programs and foundations on the Uniswap and Arbitrum sides, routing applicants to different parts of the ecosystem and checking potential eligibility. While the lines are short in the Uniswap ecosystem, where we have one specific counterparty for grants (Aaron), there are several relevant grant programs on the Arbitrum side. This led us to initiate an overarching Grant Program alignment group, which has helped steer initiatives to streamline this process through joint communication channels, including tackling topics such as potential double-dipping.

Finally, several open topics and learnings have emerged from the vibrant UAGP community discussions, which play out across many forums and involve all stakeholders, from delegates to rejected and prospective UAGP applicants. In particular, community discussions with rejected UAGP applicants who raised concerns about perceived incongruities in the program gave us deeper insight into the applicant’s perspective and journey through the UAGP. While at times it is important to take these opinions with a grain of salt, we hear them all with the utmost respect and consideration:

  • Provide specific feedback to every applicant: While we would love to provide personalized feedback to every applicant, it is difficult due to the sheer volume of applications. Currently, we offer detailed feedback to applicants who specifically request it. If the DAO decides that every applicant should receive detailed denial feedback, we are happy to factor this into capacity planning for a potential extension of the program.
  • Make evaluations of all grantees public: While we publish evaluation rubrics to promote transparency in how our committee assesses projects, we do not make the specific evaluations of individual grantees public, in line with other leading web3 grant programs, including Arbitrum Foundation Grants and Plurality Labs. We follow this market standard to mitigate the potential negative follow-on effects for denied applicants that could result from a bad rating.
  • Reduce the number of application questions: In response to feedback that the program’s application questions were difficult to align with projects, we have simplified them in our recent extension-round update to streamline the process for applicants.

We deeply appreciate the community’s ongoing support and feedback, which help us strengthen the UAGP. We welcome all community feedback as we strive to build the best possible program.


We hope this mid-term survey and these learnings bring transparency to the current operation of the program. Our DMs are always open for thoughts or comments on what other data points to include going forward. We’re thrilled with the progress so far and excited to continue extended operations for a few more months. Please stay tuned for next month’s full report as more downstream milestones (and launches!) begin to be achieved.


Contact Points

• To find all info: UAGP Information Hub

• To reach out: Discord

• To stay up to date: Twitter


Hello @fin_areta,

It’s clear that you want to present the grant program in the best possible light, and judging from the survey, the accepted applicants have very positive things to say about the program.

Still, from reading the criticism from a particular rejected applicant, I got the impression that the guidelines for project evaluation were not clearly defined or communicated to applicants, and perhaps that the Evaluation Criteria, as defined on your Notion page, were not followed for all projects.

How would you respond to this criticism? If the evaluation criteria were followed, then why not provide the scores to every applicant in private? And if they were not, shouldn’t that be added to the learnings and acknowledged as a mistake?


How many respondents took the survey?

The survey scores are very high, which is great. However, this can indicate a low number of respondents, respondents who are worried about anonymity and their ability to receive future grants, or survey questions that don’t fully flesh out areas for improvement. Since this survey is being heavily used to justify a good job performance, can you provide more details about the survey itself?


I agree with you @kfx.

Surveying successful grant recipients is like surveying employees who got a promotion. It suggests either a lack of best practice (in the survey method and the handling of conflicts of interest) or an attempt to mislead.

I had raised concerns here about the extension of this program.

Reviewing the topic here, and glancing at the discussions in the Uniswap Discord, clearly indicates there is more to be investigated based on the community’s allegations.

I would suppose the best way forward would be to pause this program and establish an audit.


Thanks for taking the time to digest the survey! You’re absolutely right that this is a survey of UAGP grantees, intended to scan for obvious gaps in the current grant program, and it should be viewed as such (we hopefully made that caveat clear in its description). However, you will find we also included learnings from other stakeholders, such as applicants.

And good point on private feedback: in line with that, we have started giving individual private feedback to our most recent applicants as a test, and we are currently drafting a more extensive standardized version that will go out to all new applicants as an element of our evaluation process, regardless of approval or denial.

Additionally, we will conduct a more comprehensive review before the end of our program, which will go into more depth and lay out all learnings and refinements. For this, we have created an extensive overview of all feedback received so far, along with the amendments that follow from it.

We’ll walk through a first version of this in our next office hours, next Thursday at 14:00 CET, as we’re eager to give everyone a chance to share insights and feedback on the UAGP. You can find the link here.


Absolutely, we’d happily share more details on the survey.

The survey covered 9 UAGP initiatives (shown in the sources section of the first slide), which included all accepted and KYB-approved grantees at the time of collection. We distributed the survey about a month ago to give adequate time for responses. Hope this helps clear up any questions!

While the scores obviously need to be taken with a grain of salt (“of course accepted grantees rate higher,” etc.), these are the highest-context participants, and their deep insights for improving our program can be found in the learnings section.
