This is a fair point. Ken and I have gone back and forth a lot on the best way to set OKRs. The first set of OKRs we wrote for the UF focused on increased volume, the number of new users and developers in the space, the number of “high impact, high engagement” governance proposals, etc. These are the results all of our work would aim to drive.
However, where we got stuck was in defining the “right” numbers to tie to those results: how much volume might increase, over what time frame (it might take a talented dev team time to build a high-impact new interface), and whether to measure percentage growth or total cumulative volume, and so on. Volume is also going to be highly influenced by market conditions: how do we know what to attribute to the UF versus the market? As for new developers and interfaces, does it even make sense to count the number of new interfaces? One interface that stems from a larger grant might have a more positive impact than five interfaces stemming from smaller grants. We ultimately decided to set OKRs based on grant allocation percentages, team focus, and the metrics we will use to measure grantee success, because those are the clear inputs to the results above that are within our control.
We also think we might learn more about which numbers tied to those results (volume, # of interfaces, etc.) make sense once the UF has been up and running for some time (if the proposal passes!). If we do, it could make sense to adjust the OKRs in the future.
That was our thinking in setting the OKRs this way. But if community members have suggestions on how we might better measure ourselves, we do want to hear them.