Congratulations to the Uniswap community. Let’s put this victory into context: this proposal was a blatant abuse of the community’s time, which could have been better spent implementing and discussing autonomous proposals. Despite what some people are pushing for here, we should not rush into any other proposals unless their merits are fully clear to the community.
Yes, Congratulations all-around, this is a big victory for Uniswap.
Hopefully we can learn from this, and recognize how close we were to losing the essence of what this space is trying to accomplish: Decentralization.
I pray that the next proposal will have a chance to be discussed by the community (and maybe even get a third-party audit or two) before going up for a vote. Let’s all be more critical of proposals from now on…
Thank you to anyone who read this thread or replied, your knowledge helps further our community
If Dharma really cares about their customers and UNI, I believe there’s a better solution. It seems (though I’m not sure) that they have lots of voting power, and therefore tokens. How about this: give some of that to the customers. How many customers do they have… divided by 15 million… fair enough for my taste. Given the way this was done, it seems hard to believe that this is for their customers.
Before the end of the voting period, I read a post on Twitter saying that they took care of it and customers will get UNI.
Hey @Dmills, catching up on the thread from where I left off a couple days ago–
I think you hit on a fundamental axiom for proposals: the burden of proof that a proposal comprises an improvement over the status quo should fall on the proposer. Or, put another way: every proposal is in bad faith until it’s proved not to be.
The point of these forums is to allow intelligent and rational discussion of proposals and give proposers their day in court, so to speak. To preserve the integrity of the protocol as a battle-tested, proven solution, any changes need to be fully interrogated. The whole point of playing devil’s advocate is that if an idea is worthwhile, it can stand up to the strongest critiques.
I need to look at this idea further, but…
My understanding of a quorum: the minimum number of stakeholder attendees for a given idea, such that no highly organized minority bloc can push through proposals until “everyone” (for whatever minimum “everyone” we define) has weighed in.
This formula simply establishes a baseline for consensus, or at least a different idea of supermajority. I’m not sure that it relates to the discussion around quorums. That said, where were you pulling this from? I appreciated the paper you referenced earlier in this thread; was this from there or elsewhere? I’d much appreciate any resources you can share.
I’d also add that it gets more complex with the addition of a third “abstain” option for voting, which I would suggest be included for quorum related conversations. Allowing people to abstain shows that they were present, they read through the proposal, and they don’t feel strongly one way or the other.
This is different, of course, from simply non-voting.
The quorum constraint we have now is a function of the available UNI governance tokens, NOT ‘delegated’ tokens. This is highly problematic, especially in the beginning when delegates hold fewer votes. Confusion around ‘self-delegation’ and ‘timelocked snapshots’ made this worse.
A good strategy to mitigate ‘highly organized minorities’ is to give tenured voting power to UNI holders. As an example, 1000 UNI with one year of tenure in a wallet address might represent 1500 votes, and possibly 2000 after two years. I’m sure the curve wouldn’t be linear, but you get the point. This balances the long-term interests of all token holders against the short-term interests of investors who may have poor intent.
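As a rough sketch of the idea, a tenure multiplier could look like the following. The curve shape and the 2x cap are invented for illustration; the post only fixes the 1000-UNI-at-one-year-equals-1500-votes point, and a steeper or uncapped curve would be needed to hit 2000 votes at two years:

```python
def tenure_multiplier(days_held: int, cap: float = 2.0) -> float:
    """Hypothetical tenure curve: voting weight grows with holding time,
    with diminishing returns, approaching `cap` asymptotically."""
    years = days_held / 365
    # 1.0x at day zero, 1.5x at one year, 1.75x at two years, -> 2.0x
    return 1.0 + (cap - 1.0) * (1.0 - 0.5 ** years)

def tenured_votes(uni_balance: float, days_held: int) -> float:
    """1000 UNI held for one year -> 1500 votes, per the example above."""
    return uni_balance * tenure_multiplier(days_held)
```

The non-linearity matters: diminishing returns keep a very old wallet from dominating outright, while still pricing in long-term commitment.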
The abstain vote should be reserved for those who don’t agree with the need to vote on a given proposal. This should be a thoughtful and precise decision. Unfortunately, because we have transparent voting metrics for the duration, it turns into a slovenly display of politicking, story-weaving of the most elaborate fantasies, and unnecessary drama. Instead, let’s have a ‘proposal baking’ period of six days, where we can debate its merits without the added weight of a vote-count leaderboard. Then, on the last day, we engage in a ‘hidden’ vote. Those who normally abstain because they want to save on gas fees, or because they realize the vote will fail quorum, are now forced to cast their vote. Put your money where your mouth is. If you don’t have skin in the game, you should sell your UNI and buy a memecoin instead. I mean that sincerely. Governance is not a joke. (That wasn’t directed at you, jumnhy.)
Great thoughts jumnhy.
Curve has experimented with time-locking CRV to achieve a similar “tenured” effect, although I believe their system grants the weight boost immediately on lock (i.e., you lock for 4 years, you get the full 2.5(?)x boost immediately). If technically feasible, duration in a given wallet address would seem a better metric to me; or a soft “lock” that allows “vote weight vesting”, applying the weight only as it is earned through tenure.
I think that’s a better system than giving the weighted voting power out immediately, but I’m not a game theorist.
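To make the contrast concrete, here is a minimal sketch of the two schemes, assuming the roughly-2.5x maximum boost recalled above; all parameters are illustrative, not Curve’s actual math:

```python
def immediate_boost(balance: float, lock_years: float,
                    max_boost: float = 2.5) -> float:
    """Curve-style (as recalled above): the full boost, scaled by the
    committed lock length, is granted the moment you lock."""
    return balance * (1.0 + (max_boost - 1.0) * min(lock_years, 4.0) / 4.0)

def vested_weight(balance: float, lock_years: float, elapsed_years: float,
                  max_boost: float = 2.5) -> float:
    """'Vote weight vesting': the boost accrues only as tenure is
    actually served, up to the committed lock length."""
    earned = min(elapsed_years, lock_years, 4.0)
    return balance * (1.0 + (max_boost - 1.0) * earned / 4.0)
```

The design difference: under the vesting variant, a whale who locks on the eve of a contentious vote gets no extra weight, since the boost only exists once the time has actually been served.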
To respond to your other points re: a 6-day voting period and a hidden vote: I understand the concerns around unnecessary dramatics. We’ve seen it play out here and in other threads around Uniswap’s first handful of proposals.
However, “politicking” could be better reframed as coordination. Depending on our governance parameters for a proposal’s minimum vote count, quorum, and passage, we might arrive at a “better” solution that needs less coordination and politics to move the needle; but the status quo demands delegation and coordination among multiple delegated blocs to achieve anything.
For representative democracy of this style, I think we need the radical transparency to incentivize honest behavior on the part of the delegates.
It does come down to the details; how does one do a truly “hidden” vote? Will the “voting record” for delegates be visible after the fact? Public coordination, again, contingent on our selected parameter set, may still be a necessary component, so we’d likely see delegations coordinating both publicly and privately, but now without oversight from the delegators.
In a hidden vote, how does one “realize” that a vote will fail quorum? I’m not sure I understand. That’s part of why I think a degree of transparency is necessary.
In practice on other platforms, we’ve seen blocs (even where delegation was off-chain/informal amongst a private syndicate) flip votes at the last minute of voting. Your proposal of a “hidden” vote is in effect the same thing as asking everyone to vote in that last minute.
I agree that we need a system that incentivizes participation for folks with skin in the game. Governance is no joke. I’m just not sure that a hidden vote (again, contingent on other parameters) is a panacea.
appreciate it once again @jumnhy
In my line of work, which is defined as ‘highly complex’ organizational behaviour, outcomes are never a product of ‘thought’, ‘due diligence’, or ‘strategy’. They are more a product of ‘rapid experimentation’ and ‘learning’.
We all have great thoughts. There is no precedent in modern literature for how a DAO should work; I’ve scanned it. We barely have a grasp of how ‘representative democracy’ should work effectively, as today’s bipartisan structure seems to clearly demonstrate. So, once we get over the fact that there ‘is no right answer’, we are only left with ‘let’s try something, and let’s do it quickly’. The next Uniswap-killer is around the corner, and we cannot afford to be slow or methodical. So yeah, @jumnhy, let’s try out some of your thoughts sprinkled in with some of mine, but the key is: let’s experiment safely to figure out what works.
So, you see, I’m not necessarily married to my ideas. They are just ideas, for the sake of learning whether they work or not. It’s a difficult mindset to enter into, but it is essential in the modern context. Moore’s Law doesn’t give us any place to hide. If you really want to geek out, the ‘law of requisite variety’ from cybernetics research gives us even less room to hide. Complex systems survive on the edge of discomfort. It’s how the physics works.
Hidden, not-hidden, quorum threshold, radical transparency. These are all levers that trigger the butterfly effect in systems. Let’s pull them and see what happens. If my assumption that the ‘delegates’ are faithful actors intending to improve the system is true, then we can afford to move quickly.
If Jeff Bezos can adopt a complexity mindset of rapid experimentation at Amazon, we have no excuses.
I signed up to say I am against this. I am not a conspiracy theorist. The only person who would be for a package deal like this is a Dharma user. Period. So I think posters saying they are for this should clarify whether they are also a Dharma user, and if not, also explain why the hell they are for this.
As always, a thoughtful reply.
There’s not necessarily a blueprint for how a DAO should work, but mechanism design is more broadly the area I’ve been reading up on. Organizational behavior is another fascinating lens to look at this through. Mechanism design is full of oddball edge cases that leave me apprehensive about too aggressive an approach in radical experimentation. That said, I appreciate the go-getter mindset.
I’ll highlight one additional area of your response: DELEGATES are likely to be good-faith actors. They only derive their power through the consent of the governed, as it were.
Whales, particularly exchanges and the like, are not such good actors. They are self-interested rather than organizationally interested.
I see stagnation as less of an existential threat to Uniswap than you do; while in a broad sense staying nimble and uncomfortable can be organizationally healthy, I’d emphasize that we need to do so safely. Particularly with respect to governance, we don’t yet have a robust risk analysis framework for knowing which changes will break things and, in all likelihood, do so irreversibly.
In this vein, I appreciated @tarun’s in-depth analysis of threat vectors to governance (“Proposal: Reduce amount of UNIs required to submit governance proposal”, for those who missed it earlier).
Some of our problems can be solved with engineering solutions; others, as you suggest, will go through iterative development through rapid experimentation.
As an aside, I always look forward to reading your posts, @rabbidfly. Like yourself, I’m not wedded to any ideas in particular, and it’s precisely this sort of critical discourse that helps us all figure this out.
I believe this is correct and this proposal would be in our favour!!
Yeah, I read @tarun’s threat modeling treatise. I like it, but I don’t think the general approach will work on principle. The foundation of my claim comes from cybernetics and the ‘law of requisite variety’. Essentially, it says “the internal complexity of the system (Uniswap) must match the external complexity it confronts”.
Trying to predict all outcomes in a threat analysis is important, but you can’t stop there. This is the difference between being robust and resilient. Robust systems end up failing because they cannot possibly keep up with all the threat types and frequency. Resilient systems focus on ‘rapid response’ mechanisms to address the threat when it happens.
I can’t believe I’m saying this, but Uniswap needs to take a page out of Google’s resiliency engineering handbook (we have some former Google employees here). Google faces hundreds, if not thousands, of threats per day. The user never notices. Why? Because instead of focusing on robustness (threat modeling every possible type of threat), they focus on resiliency (isolate, identify, repair, CI/CD that sucker in seconds, and get it out the door into production before anyone blinks).
This is why I’m insisting that governance progress bias towards resiliency and rapid response, rather than the opposite. Otherwise, when a true threat arrives, we will have lost the operational resiliency to address it quickly. That’s my fear in a nutshell. As for the specifics of governance, I don’t care as much. I only care that we move progressively towards a resilient state, and I will object to governance proposals that would constrain this essential attribute of all complex adaptive modern organizations.
The irony among crypto-natives is that the best template for navigating change is still outside our domain.
What is mechanism design?
I hear you on that front.
The reason I reference mechanism design is that unlike conventional organizations, DAOs are governed by a set of rules that are functionally immutable, so at least some bits of entropy are minimized. In essence, mechanism design is tokenomics crossed with game theory.
Here’s a great primer:
Are you familiar with Aavenomics, Aave’s proposed governance and tokenomics framework? I think that’s a good synthesis of economics and governance.
I think by combining a cybernetics-oriented and mechanism design informed approach to our governance process, we can do some great things here.
That said, I can’t help but mention: I have a knee-jerk response to the “I don’t care/want to engage with/know how X is done, I just care that X happens!”
e.g. “We have to stop climate change. Not doing so represents an existential threat to our species. I don’t care as much about the specifics of environmental science, I only care that the world move progressively away from destroying the planet.”
Your in-depth posts across the forum belie the notion that this is all you’re saying, but, like a somewhat inept middle manager, that sentiment feels like demanding results without any commitment to participating in the work.
In that sense: concretely, what does cultivating resilience look like in our community here?
I see you waging the good fight in terms of cultivating a lean, experimental culture. I see references to rapid experimentation as an exploratory methodology and rapid response as a threat mitigation modality.
What does this look like in the context of Uniswap, when balanced against stability of the platform, in the context of a DAO where certain rules are immutable, and their consequences, irreversible-- certain spaghetti that, once thrown, might stick on the wall and rot for eternity?
I’m not sure that I have examples of disastrous governance decisions, but I certainly have examples of poor mechanism design. The idea will be to marry the two: the thoughtful analysis needed to find theoretical future equilibria, and a resilient organizational structure.
I’m pretty clear on how one does the mechanism design part (though I’m just an informed layman), but I’m still struggling to understand what you’re asking for with regard to achieving resilience. What are the intermediate steps?
Ha! Lots to think about. (where to start…)
DAOs are governed by a set of rules that are functionally immutable, so at least some bits of entropy are minimized. In essence, mechanism design is tokenomics crossed with game theory.
I’m essentially a creature of the Santa Fe complexity group. In full disclosure, my philosophical biases are heavily influenced by the belief that complex adaptive systems cannot be measured through mechanistic designs. This is an a priori position for me, and it makes it difficult for me to evaluate such systems. To be clear, this is not a criticism, simply the disclosure of a point of view; but it also helps us understand where we may disagree.
(i read through your link btw - thanks!)
The core premise behind much of complexity-based research is the evaluation of ontology, or causality. Where many see a mechanistic, Newtonian world that can be understood given enough study and investigation, others, like myself, see only probabilities and dispositionality. These probabilities are distinctly non-predictive in complex systems. The worldview models can be differentiated as ‘deterministic and reductionist’ versus ‘non-deterministic’. I’m squarely in the latter camp. That doesn’t mean mechanism design lacks utility, only that I recognize limits on how effectively its modeling can predict outcomes.
For instance, the mathematical notation used to measure a system’s utility, with the requirement that it exceed the cost, is very troublesome (in the paper embedded in your article link).
- This doesn’t account for hierarchical benefits through multiple systems. A system can never be evaluated within itself, only within the system harbouring it. A car, for example, is a system. It has a goal, however, which can ‘never’ be determined by reductively examining the car. Only when you zoom out to scan transportation systems, supply and demand, commuting, and freedom of travel do you begin to understand what a car truly represents. Measurement of this network effect is impossible in a complex system.
- Humans make everything ‘complex’, even in a DAO with immutability built in. This forum, and the governance discussion around the first proposal, is proof of that. Widen the system to include economic modeling, threat modeling (the SEC coming in and poking around), the viability of the crypto-anarchist worldview in the current political environment, the utility of DeFi, the general prosperity of crypto natives and non-natives, etc. The point of this list is its length. You begin to see that the variables contributing to outcomes are non-deterministic. The problem with placing too much value on modeling is that you restrict the possible outcomes to the number of variables the model can account for. This is my fundamental issue with these approaches.
- Many will claim success when they find a ‘use case’ demonstrating that a certain type of modeling worked, but this is when I fondly recall the famed statistician George Box’s quote: “all models are wrong, but some are useful”. How I interpret this is to concede that modeling the universe or human behaviour is impossible. Worthwhile? Yes! But only when you recognize the limits. As a panacea, it becomes a blunt weapon that dulls understanding.
I thank you for giving me the benefit of the doubt with
When I said I didn’t care about the ‘specific’ governance changes, it is only because I care far more about the ‘governance engine’ itself. I’m always zooming out to evaluate double-loop thinking, or problem dissolution, not resolution. Dissolution is when you ‘reframe’ the system to ensure the problem never returns. In other words, I am challenging the DAO foundation, and privileging that concern over the individual proposals that operate on top of that framework. So, hopefully you now see elevated concern where you may have seen a lack of it earlier.
I agree with your suggestion to marry the methods of inquiry together (mechanism design with complexity). If I knew exactly how, I would be writing papers about it. This helps reveal that what we are doing is fundamentally ‘experimental’ in nature. Humans simply have never successfully implemented this type of social organization before. If we maintain that perspective on ‘experimentation’, we are likely to be surprised by ‘beneficial’ outcomes. However, if we remain mired in trying to make the existing framework function, we limit the variation in responses and almost ensure our demise. So, a roundabout way to answer your question:
It’s a mindset first and foremost: accept that we cannot possibly predict outcomes through design, only ‘nudge’ towards our north star (goals). Once you accept the indeterminacy of outcomes, then we can skillfully use ‘mechanism design’ to tinker with the gears and pulleys to catalyze change.
Finally, most people are unaware that evolution (nature’s mechanism for adaptation) works mostly through exaptation, which is impossible to model. For example, the wings of a bird are an exaptation: feathers initially evolved to provide warmth, not flight. The subsequent adaptation for flight could only happen because feathers had already evolved for another purpose in that species. Most of evolution works in this manner. An experiment is not strictly useful because it proves or disproves a hypothesis. An experiment in complexity-based science is useful because it catalyzes non-predictive changes, many of which may result in a beneficial adaptation. Trying to ‘stabilize’ Uniswap governance is the equivalent of slowing adaptation and removing any possibility of a beneficial exaptation. In my philosophical model, this is deadly.
(We should create a separate post, perhaps in Meta-Governance, to continue the discussion; I fear we are going well beyond the OP’s intent. I really enjoy the honest dialog.)
I’m not sure I understand it all, but some explanation I found on @VitalikButerin’s website may accord with your point and can guide us…
So what’s the alternative? The answer is what we’ve been saying all along: cryptoeconomics. Cryptoeconomics is fundamentally about the use of economic incentives together with cryptography to design and secure different kinds of systems and applications, including consensus protocols. The goal is simple: to be able to measure the security of a system (that is, the cost of breaking the system or causing it to violate certain guarantees) in dollars. Traditionally, the security of systems often depends on social trust assumptions: the system works if 2 of 3 of Alice, Bob and Charlie are honest, and we trust Alice, Bob and Charlie to be honest because I know Alice and she’s a nice girl, Bob registered with FINCEN and has a money transmitter license, and Charlie has run a successful business for three years and wears a suit.
Social trust assumptions can work well in many contexts, but they are difficult to universalize; what is trusted in one country or one company or one political tribe may not be trusted in others. They are also difficult to quantify; how much money does it take to manipulate social media to favor some particular delegate in a vote? Social trust assumptions seem secure and controllable, in the sense that “people” are in charge, but in reality they can be manipulated by economic incentives in all sorts of ways.
Cryptoeconomics is about trying to reduce social trust assumptions by creating systems where we introduce explicit economic incentives for good behavior and economic penalties for bad behavior, and making mathematical proofs of the form “in order for guarantee X to be violated, at least these people need to misbehave in this way, which means the minimum amount of penalties or foregone revenue that the participants suffer is Y”. Casper is designed to accomplish precisely this objective in the context of proof of stake consensus. Yes, this does mean that you can’t create a “blockchain” by concentrating the consensus validation into 20 uber-powerful “supernodes” and you have to actually think to make a design that intelligently breaks through and navigates existing tradeoffs and achieves massive scalability in a still-decentralized network. But the reward is that you don’t get a network that’s constantly liable to breaking in half or becoming economically captured by unpredictable political forces.
I think we’re very much on the same page. Mechanism design is simply the thoughtful consideration for structuring these (crypto)economic incentives.
I like the idea of integrating that with complexity and exaptive systems “design” because one approach is very much in media res and the other is a priori. We can experiment and rapidly iterate our responses to emerging complex problems, while informing ourselves to avoid introducing perverse incentives.
Sounds like you’re very much in alignment with this approach!
so much to process…
consensus was solved by satoshi at the intersection between cryptography and economics. but consensus of what?
“consensus of record” - trust in the integrity of historical events
what better way to decentralize ‘history’. cryptography gives us the tech, and economic incentives through PoW give us the feedback mechanism that hurts the most - money.
governance is different however. it is forward looking and experimental in nature. the consequences of decisions are unknown a priori. linear causality was taken to the woodshed. governance in this context is the ‘extraction of potential future financial rewards’. but is that really true? as a delegate who is responsible for the health of the system they are overseeing, how can they use cryptoeconomics to form good decisions, either as an individual or a cartel? long-term thinking occasionally requires short-term sacrifices, or investments, in order to achieve a benefit. ‘benefits’ often rely on value systems that go far beyond ‘financial rewards’, and also feature ethical considerations like sustainability, fairness, lawfulness, etc.
i’m barely able to maintain a coherent thread, partially because this exploration into human social design is so new. let’s try and simplify. there are 2 modalities here.
1. a system of record, with proof via decentralized mechanisms (tech and incentives, i.e. cryptoeconomics)
2. a system of adaptation or advance, with proof of value via retroactive mechanisms of a posteriori observation and study (PDCA)
#2 may very well become perverted if you attach economic incentives, because instead of trying to improve the system, delegates will engage in plutocratic behaviour to improve their economic standing. so instead, we have a voting-delegate system that relies on a different incentive paradigm - that of shared values. so, it’s no wonder that it also becomes subject to all the human vices that go with it.
Cryptoeconomics is not the answer for governance. It can’t be. The structure for human social advancement cannot be solved via the lens of economic constraint. in the end i offer no alternative except to encourage frequent experimentation, frequent failure, because the fuel for emergence is change. if agents in the system have the common good in mind, then i remain confident that we will incentivize beneficial adaptations. the essential question is and remains, how to incentivize this behaviour?
we need something new. perhaps cryptoanthroeconomics. the intersection of cryptography, humanity, and economics is likely to offer clues. cryptoeconomics was devised to ‘avoid’ human tendencies. it has limitations because, unfortunately, you can’t design the human out of the system.
Good points all around.
Economic incentives are insufficient. Even if you accept the premise in theory (I don’t) that market solutions can fix any problem, you have the issue of the human and all the unknown unknowns that humans bring to the table. While we don’t have a historical analog for DAOs and decentralized governance, we have numerous examples of the failure of the free market in the face of human ingenuity.
Ultimately, humans (for now) control the system parameters, and that means that a.) a complex system will display emergent behavior, heuristics for which are non-existent and/or ineffable a priori, and b.) in the final analysis, none of those parameters for the system exist in a vacuum.
On a practical note, to address the plutocratic equilibrium that everyone seems to be concerned about–
We can introduce a sunset/expiration date on delegated votes. In another thread, we’ve been discussing the necessity of allowing “undelegation” at will, but as a check on delegate power, we can find a balance between allowing for delegated/minimal agency voting (ie, individual holders don’t have to invest time and energy in following governance minutiae, and simply give their voting power to delegates they trust) and the perverse incentives for delegates towards plutocracy.
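A minimal sketch of what such a sunset could look like, modeling delegation as expiring after a fixed number of proposals; the count of five and all the names here are invented for illustration, not anything in the Uniswap contracts:

```python
from dataclasses import dataclass

MAX_PROPOSALS_PER_DELEGATION = 5  # hypothetical sunset parameter


@dataclass
class Delegation:
    delegator: str
    delegate: str
    votes: int
    proposals_voted: int = 0


def is_active(d: Delegation) -> bool:
    """A delegation 'sunsets' after a fixed number of proposals; the
    holder must actively re-delegate, a periodic check on the delegate."""
    return d.proposals_voted < MAX_PROPOSALS_PER_DELEGATION


def cast_vote(d: Delegation) -> int:
    """Return the voting weight spent, or raise if the delegation expired."""
    if not is_active(d):
        raise ValueError("delegation has sunset; re-delegation required")
    d.proposals_voted += 1
    return d.votes
```

Counting proposals rather than wall-clock time means a delegate who rarely votes holds the delegation longer, which is arguably the point: the check triggers on exercised power, not elapsed calendar.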
I am unable to read through all of the comments, so forgive my digression. I do think the discussion on cryptoeconomics is productive, but I also think we should take a few practical steps to get the threshold vote re-voted on and passed. Until then Uniswap governance is at a standstill.
It seems like one of the initial reasons people voted NO against lowering the thresholds is that the ‘quorum threshold’ and ‘proposal threshold’ should be two separate votes.
Furthermore, ‘Retroactive Proxy Contract Airdrop — Phase One’ has now failed to pass (which seemed to be a major topic of discussion / fear of many in this thread)
Is there a process to re-submit ‘quorum threshold’ and ‘proposal threshold’ as two separate votes?
Holy shit, I just happened across this post today again, and I missed a HUGE point:
You said this, but I’m reiterating for my and anyone else’s benefit:
Quorum is % of total supply, but only votes that have been delegated prior to any proposal being submitted (self-delegated, or to someone else) can participate.
This is akin to the bullshit voter registration requirements, and even more bullshit party registration requirements, in the United States. Not to mention that delegation incurs a gas fee (not a substantial one, but still… we’re already solidly in the plutocratic sphere by making 1 $-denominated share = 1 vote, so it’s kind of insult to injury).
On the flipside, you do want some sort of threshold of the total supply to be involved in the governance process.
But functionally, you have scenarios where it is mathematically impossible to achieve quorum based on the levels of “voter registration”.
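To make the arithmetic concrete: with quorum fixed at 4% of the 1B UNI total supply (40M votes), any pre-proposal delegated pool smaller than that makes quorum unreachable regardless of turnout. The delegated figure below is hypothetical:

```python
TOTAL_SUPPLY = 1_000_000_000   # 1B UNI total supply
QUORUM_PCT = 0.04              # quorum = 4% of TOTAL supply
DELEGATED = 30_000_000         # hypothetical votes delegated pre-proposal

quorum = TOTAL_SUPPLY * QUORUM_PCT  # 40M votes required

# Even with 100% turnout of every delegated vote, quorum is unreachable,
# because only pre-delegated votes can participate:
quorum_is_reachable = DELEGATED >= quorum  # False in this scenario
```

This is the "mathematically impossible" case: the ceiling on participation is the delegated pool, while the quorum bar is set against the whole supply.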
A couple thoughts on this:
1. Make the quorum threshold a function of the number of delegated votes.
2. In tandem with 1., require some minimum number of delegated votes for any proposal to be submitted in the first place.
3. Give delegation a sunset provision, i.e., it lasts only for a given number of votes or a given period of time (I lean towards number of votes).
I’m not super thrilled about point 2, but I’m picturing something with a quorum threshold proportional to the number of votes delegated: if a lot of votes are delegated, the quorum threshold is high; if few are, the quorum threshold is low, with a system minimum established by the required delegation count. We could borrow from Vitalik’s quadratic voting to temper some of the edge cases (e.g., delegate the bare minimum of votes needed to propose, then keep voter participation super low so something passes easily).
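A sketch of point 1 with the minimum from point 2 folded in as a floor; the 25% rate and the 10M-vote floor are invented parameters for illustration, not a concrete proposal:

```python
def quorum_threshold(delegated_votes: int,
                     pct_of_delegated: float = 0.25,
                     min_delegated: int = 10_000_000) -> float:
    """Hypothetical rule: quorum scales with the delegated supply rather
    than the total supply, with a hard floor so a tiny delegated pool
    can't make proposals trivially passable (the edge case above)."""
    effective = max(delegated_votes, min_delegated)
    return pct_of_delegated * effective
```

The floor addresses the gaming scenario directly: even if only the bare minimum of votes is delegated, quorum never drops below 25% of the 10M floor. A quadratic variant could instead scale with the square root of the delegated pool to soften the dependence further.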
I’m not so sure.
From our previous conversations, I think I know what complexity would tell us about this: you can’t control every edge case. You can’t mitigate every governance risk. Be adaptable instead. And I agree with that, fundamentally, but making quorum a function of the total supply, while large chunks of the supply are locked up, and participation is limited to delegated votes (so some fraction of the fraction)… that limits our ability to adapt substantially.