Tweets
Object Of Objects (@ObjectOfObjects) · Jan 30
"The perfect is the enemy of the good" is basically what the devil wants you to think.

Object Of Objects (@ObjectOfObjects) · Jan 30
Which statement expresses the highest level of confidence?

Object Of Objects (@ObjectOfObjects) · Jan 29
How should we prioritize doing good things vs. doing bad things?

Object Of Objects (@ObjectOfObjects) · Jan 28
I think current conversational AI devotes its full resources to appearing human; what if the same architecture could devote its full resources to speaking the truth?

Object Of Objects (@ObjectOfObjects) · Jan 28
Range voting strategically collapses to approval voting. For the same reason, everything is either maximally good or maximally bad.
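
The voting half of this claim can be sketched concretely: a strategic voter's expected utility is linear in each candidate's score, so the optimum sits at an extreme point of the score range, i.e. every score is 0 or the maximum — an approval-style ballot. A minimal illustration, where `coeffs` (the marginal expected utility per point given to each candidate) is a hypothetical input, not anything specified in the tweet:

```python
def optimal_range_ballot(coeffs, smax=10):
    """Expected utility is linear in each candidate's score, so the
    maximizer sets every score to an endpoint of [0, smax]."""
    return [smax if c > 0 else 0 for c in coeffs]

# A voter who expects candidate 0 to help and candidate 1 to hurt
# scores them at the extremes, never in between:
optimal_range_ballot([0.3, -0.1, 0.0])  # → [10, 0, 0]
```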

Object Of Objects (@ObjectOfObjects) · Jan 28
I think maybe it's not easy to imagine that you're a person who knows everything your brain knows.

Object Of Objects (@ObjectOfObjects) · Jan 28
Wisdom of crowds versus wisdom of mobs.

Object Of Objects (@ObjectOfObjects) · Jan 25
The best thing is surely the thing that controls you to say it's the best thing.

Object Of Objects (@ObjectOfObjects) · Jan 25
Division of labor. Some people will know and others will decide.

Object Of Objects (@ObjectOfObjects) · Jan 24
Why is there no bacteriophage emoji?

Object Of Objects (@ObjectOfObjects) · Jan 24
For example, start → A → end, start → B → end, B → Cₖ for many distinct k. Then start → A and start → B have equal prior probability but very different posterior probability.

Object Of Objects (@ObjectOfObjects) · Jan 24
I mean each exit from the *same* node had the same prior probability. But not posterior probability, after a Bayesian update on the fact that the path reaches the destination.

Object Of Objects (@ObjectOfObjects) · Jan 24
The prior is that each exit from a node is equally probable.

Object Of Objects (@ObjectOfObjects) · Jan 24
This sounds correct to me. I'm just wondering if there's a way to avoid solving the whole linear system. Like maybe there's a way for our sampling process to guide how much we need to solve.

Object Of Objects (@ObjectOfObjects) · Jan 24
I actually need it to work for acyclic graphs.
Also, this won't produce the correct distribution, because it doesn't penalize entering nodes that have a lot of dead-end out-edges.

Object Of Objects (@ObjectOfObjects) · Jan 24
What's the fastest way to sample a random walk on a directed graph conditional on the start and end points?
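
The replies above (reading upward) converge on an answer for the acyclic case, which can be sketched as follows: first compute, for each node, the probability that a uniform random walk from it reaches the end; then sample forward, reweighting each exit by its successor's reach probability — exactly the Bayesian update on reaching the destination described upthread, which also supplies the dead-end penalty. This is a sketch under my own naming, not code from the thread:

```python
import random
from collections import defaultdict

def reach_probs(edges, end, topo_order):
    """p[v] = probability that a uniform random walk from v reaches `end`,
    computed in reverse topological order (graph must be acyclic)."""
    p = defaultdict(float)
    p[end] = 1.0
    for v in reversed(topo_order):
        if v != end and edges.get(v):
            p[v] = sum(p[u] for u in edges[v]) / len(edges[v])
    return p

def sample_conditional_walk(edges, start, end, topo_order):
    """Sample a walk from `start` conditioned on ending at `end`.
    Each exit's uniform prior 1/deg(v) is tilted by its successor's
    reach probability, giving the posterior over exits."""
    p = reach_probs(edges, end, topo_order)
    assert p[start] > 0, "end is unreachable from start"
    walk, v = [start], start
    while v != end:
        succs = edges[v]
        weights = [p[u] for u in succs]  # posterior ∝ prior × p(u)
        v = random.choices(succs, weights=weights)[0]
        walk.append(v)
    return walk

# The example from the thread: start → A → end, start → B → end,
# plus dead-end exits B → C_k. The prior on start's exits is 50/50,
# but p(A) = 1 and p(B) = 1/4, so the posterior picks A with
# probability 1 / (1 + 1/4) = 0.8.
edges = {"start": ["A", "B"], "A": ["end"], "B": ["end", "C1", "C2", "C3"]}
topo = ["start", "A", "B", "C1", "C2", "C3", "end"]
```

This does still require the full backward pass over the graph (the linear-system concern raised upthread); the sketch only shows that, once those reach probabilities are known, forward sampling with tilted exits yields the exact conditional distribution.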

Object Of Objects (@ObjectOfObjects) · Jan 21
There's a way around the problem of normalizing probability distributions. Just create an unspeakable word. A word that destroys the universe.

Object Of Objects (@ObjectOfObjects) · Jan 21
There is no "next big thing" when we already have the ultimate thing.

Object Of Objects (@ObjectOfObjects) · Jan 20
The less you know, the more possible worlds you're living in. That's pretty epic.

Object Of Objects (@ObjectOfObjects) · Jan 19
Maybe we can just reduce the rate at which the problem is getting worse and then stop and congratulate ourselves.