https://github.com/kyegomez/EXA#for-humanity
https://blog.apac.ai/liberation-awaits
EDIT: the author seems to be releasing poor implementations of recent papers in an attempt to drive attention towards an AI-related death cult.
That's a reference to Warhammer 40k, a popular miniatures wargame from Games Workshop. Its tagline is:
In the grim darkness of the far future, there is only war.
It could be kind of satirical, if only to link recent events with the ideas of
* future technology as impossibly obscure
* a psionic emperor who consumes minds to protect humankind from cosmic terrors
* tech-priests, who maintain ancient tech
* "machine spirits," who must be appeased
Why the theological meta discussion at all?
Is the thing he talks about actually working? Is it improving AI output like he claims, or not?
"that Elevates Model Reasoning by atleast 70% "
I am doubtful, but I don't have the tools to investigate it on my mobile. This is the debate I would like to read about, not the potentially obscure beliefs of the developer.
I think if you could find a way to add better contexts and memories, and combine some LoRA fine-tuning to perfect a model on a specific vertical, you could essentially have a (nearly) full AGI topically: essentially an expert that (mostly) doesn't hallucinate... maybe a 2 to 3x multiplier on GPT-4. I mean, in a year it'll probably be even more insane what's available.
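(For the LoRA piece, a rough sketch with Hugging Face's peft library - the base model name and target modules below are just placeholders for illustration, not a recommendation:)

# Sketch only: attach LoRA adapters to a base model so a small set of adapter
# weights can be trained on a narrow vertical's data.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"],  # for Llama-style models you'd target q_proj/v_proj instead
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable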
Look at the transition of Midjourney v1 to v5 in a single year.
It's been a wild year for AI. The experiment where they hooked a bunch of Sims up together with AI also used something similar to this, I think, in creating thought chains from multiple agents.
Tl;dr: crazy or not, the idea of using a branching system to get better results does make some sense, so it's not completely bunk or anything, IMHO. At least the concept; I can't speak for this specific implementation.
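Roughly, the branching idea looks like this (a minimal sketch, assuming a generic llm_call(prompt) -> str helper; none of these names come from the repo or the paper):

# Minimal sketch of a breadth-limited "tree of thoughts" search.
def tree_of_thoughts(problem, llm_call, breadth=3, depth=3, keep=2):
    frontier = [""]  # partial chains of thought
    for _ in range(depth):
        candidates = []
        for state in frontier:
            for _ in range(breadth):
                thought = llm_call(f"Problem: {problem}\nReasoning so far: {state}\nNext thought:")
                candidates.append(state + "\n" + thought)
        # score each candidate and keep only the most promising branches
        scored = []
        for c in candidates:
            score = float(llm_call(f"Rate from 0 to 1 how promising this reasoning is for solving '{problem}'. Answer with only a float: {c}"))
            scored.append((score, c))
        frontier = [c for _, c in sorted(scored, reverse=True)[:keep]]
    return frontier[0]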
Edit: I guess I skimmed and misread the room. I was thinking this guy was part of the original paper and implementation. He's not, which does warrant more skepticism. My bad.
What I believe they're doing is feeding papers to a LLM as soon as they come out in order to get a repo they can advertise. Once someone releases a working implementation they just copy it over.
I was able to generate almost identical code to what they released by giving ChatGPT pseudocode copied verbatim from the original paper.
The industrial revolution massively changed the world, and the speed at which its changes occurred was positively slow compared to what we can do today. Imagine you could develop the steam engine, then press a button and print one out in India, the US, and France in hours. WWI would have looked a lot different, as in it would have been even bigger in scope.
And an unchained LLM trained on reality is far more capable of finding solutions to that problem than a bunch of squabbling politicians.
Not that I disagree with this statement, I don't, but this is not a silver bullet. Technology is, ultimately, operated by humans, and no amount of frontier research and development can overcome collective action problems. At some point, you do have to sit down with these stupid politicians and get everyone on board. The loom was invented hundreds of years before the industrial revolution; in fact it was nearly forgotten, and the design survived due to a few happy accidents. It was only after the English Civil War and the establishment of checks on royal power that widespread adoption was possible.
In response to the coming apocalypse: this isn't the first time everyone has had a vague sense of potential doom about the future. I believe this happens during any time of fundamental change, which makes the future uncertain, and we interpret that uncertainty as apocalyptic. Back during the Thirty Years' War that apocalyptic belief manifested as God being angry with us; today it's the (very real) problems our rapid industrialization has created. Not to minimize the problems that we face - well, minimizing only in that they probably won't lead to extinction. The various predictable factors mentioned have the potential to make life really shitty and cause massive casualties.
While framing these issues as a matter of extinction may feel like a way of adding urgency to dealing with these problems, instead it's contributing, on an individual level, to fracturing our society - we all "know" an apocalypse is coming but we're fighting over what is actually causing that apocalypse. Except that there will be no apocalypse - it's just a fear of the unknown; something is fundamentally changing in the world and we have no idea how the cards will land. It's no different than a fear of the dark.
I cannot assure you that we won't have something like a nuclear apocalypse in the next few decades, and yet here you are, certain it's not going to happen. How can you be assured of this future when underlying assumptions like the value of labor will be experiencing massive changes, while asset inflation is on an ever-increasing upward spiral?
> If we don't reach at least Kardashev scale 1 in the next hundred years or so, we're going to go extinct due to several now-predictable factors.
Many people are certain of human extinction for one reason or another; it doesn't sound like you're one of them. I'm saying that we don't know what the future will bring, and that uncertainty manifests as apocalyptic thinking. I also specifically mentioned that we are facing multiple problems that can cause huge devastation, and I'm not making the argument that "Oh hey, everything is OK!" Just that framing things as apocalyptic contributes to the schism and prevents us from doing anything, because everyone refuses to listen to anything else since they believe their lives are at stake.
I guess I shouldn't say "it won't be extinction", but that's a way, way lower probability than people think. It's just that a massive number of people have thought the world would end many times throughout history, so I'm skeptical of "well, this time we're RIGHT".
Sounds like someone doesn't like their job.
The whole post is amazing -- it reads like stereotypical cult propaganda straight out of science fiction. I definitely expect they'll one day be posting about how we can digitize our consciousness à la "Scratch" from that one Cowboy Bebop episode [1].
We're radically devoted to Humanity.
And we're not an AI-related death cult. We're Human first; AI is simply a means to an end.
1) Does your belief system include an obligation to tell the truth?
2) Why do you capitalise humanity and human?
3) Can you give a good argument against your own beliefs? What's the best argument that the basic premise of the things you are saying is wrong?
In it he details the possibility of AI being used to create new religions that are so powerful and persuasive that they will be irresistible. Consider how QAnon caught on, despite pretty much anyone on HN being able to see it as a fraud. Most people are thinking about how AI will impact politics but I am really interested in how it will impact spirituality.
I've been rabbit-holing on the last century's New Age cult scene, like Manly P. Hall and Rudolf Steiner. Even more respectable figures like Alan Watts were involved in some ... interesting ... endeavors like the Esalen Institute.
We are overdue for a new kind of spirituality. My bet is that AI is going to bring it whether we want it or not.
1. https://www.youtube.com/watch?v=LWiM-LuRe6w&ab_channel=Yuval...
Is it basically a reimplementation using Guidance instead of OpenAI's API directly?
in any case, what a player tho... learning AND obtaining clout?
prompt = f"Given the current state of reasoning: '{state_text}', pessimitically evaluate its value as a float between 0 and 1 based on it's potential to achieve {inital_prompt}"
prompt = f"Write down your observations in format 'Observation:xxxx', then write down your thoughts in format 'Thoughts:xxxx Given the current state of reasoning: '{state_text}', generate {k} coherent solutions to achieve {state_text}"
prompt = f"Given the current state of reasoning: '{state_text}', pessimistically evaluate its value as a float between 0 and 1 based on its potential to achieve {initial_prompt}"
self.ReAct_prompt = "Write down your observations in format 'Observation:xxxx', then write down your thoughts in format 'Thoughts:xxxx'."
prompt = f"Given the current state of reasoning: '{state_text}', generate {1} coherent thoughts to achieve the reasoning process: {state_text}"
prompt = f"Given the current state of reasoning: '{state_text}', evaluate its value as a float between 0 and 1, become very pessimistic think of potential adverse risks on the probability of this state of reasoning achieveing {inital_prompt} and DO NOT RESPOND WITH ANYTHING ELSE: OTHER THAN AN FLOAT"
prompt = f"Given the following states of reasoning, vote for the best state utilizing an scalar value 1-10:\n{states_text}\n\nVote, on the probability of this state of reasoning achieveing {inital_prompt} and become very pessimistic very NOTHING ELSE"
self.ReAct_prompt = '''{{#assistant~}}
{{gen 'Observation' temperature=0.5 max_tokens=50}}
{{~/assistant}}'''
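(For anyone unfamiliar with that syntax: with the mid-2023 guidance API, a template like that gets executed roughly as below. This is only a sketch, and the template text here is an example of mine, not the repo's:)

import guidance

# Sketch of running a Guidance chat template (guidance ~0.0.x API from mid-2023).
guidance.llm = guidance.llms.OpenAI("gpt-3.5-turbo")
program = guidance('''{{#user~}}
Given the current state of reasoning: {{state_text}}, write down your observations.
{{~/user}}
{{#assistant~}}
{{gen 'Observation' temperature=0.5 max_tokens=50}}
{{~/assistant}}''')
result = program(state_text="2 + 2 = 4")
print(result['Observation'])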
There are also some system prompts: https://github.com/kyegomez/tree-of-thoughts/blob/732791710e...
- keep track of todo items
- assist with progress
- check in on mental + emotional state
and down the road
- keep track of state over time
- give feedback/make observations
The paradigm shift is having it contact us, instead of the other way around. The ToT model has 1 additional parameter on top of the LLM - probability of success. What would the parameters be for a more open-ended conversation?
Engagement! Just like social media!
IDK if our current models have enough of "mode 1" to power this system. It's also plausible that our current "mode 1" systems are more than powerful enough and that inference speed (and thus the size/depth of the tree that can be explored) will be the most important factor.
I hope that the major players are looking at this and trying it out at scale (I know DeepMind wrote the original paper, but their benchmarks were quite unimpressive). It's plausible that we will have an AlphaGo moment with this scheme.
I think the first order of mag will be in tree-of-thought processing. The number of additional queries we need to run to get this to work is at least 10x, but I don't believe 100x.
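Back-of-the-envelope (illustrative numbers, not from the paper): with branching factor b and depth d you make roughly b + b^2 + ... + b^d generation calls plus an evaluation call per node, which for modest trees lands in the tens of calls rather than hundreds:

# Rough call-count estimate for a thought tree (illustrative only).
def tot_calls(branching=3, depth=3):
    nodes = sum(branching ** level for level in range(1, depth + 1))  # generated thoughts
    return nodes * 2  # one generation call + one evaluation call per node, roughly

print(tot_calls())      # 78 calls for b=3, d=3
print(tot_calls(5, 2))  # 60 calls for b=5, d=2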
I think the second order of mag will be multimodal inference, so the models can ground themselves in 'reality' data. Whether "the brick lay on the ground and did not move" or "the brick floated away" is only decidable based on the truthfulness of all the other text corpus it's looked at. At least to me it gets even more interesting when you tie it into environmental data that is more likely to be factual, such as massive amounts of video.
As this gets explored further, I believe we will start finding out why human minds are constructed the way they are, from the practical/necessity direction. Seems like the next step is farming out subtasks to smaller models, and adding an orthogonal dimension of emotionality to help keep track of state.
In particular, it jumps out that a "ranking model" (different, I think, from current ranking models) to judge which paths to take and which nodes to trim would make some level of sense.
Better lucky than good!
(also, man he's awesome. How does he have such a strong grasp on all of the topics in the field?)
Wow. Lick, don’t sniff, the fresh paint.
If everyone is dead, you don't have to worry about death, or any of those other pesky hard to solve problems!
The research itself [1] seems legit. The paper author also wrote a paper called ReAct [2], which is one of the core components of the langchain framework.
* [1] https://arxiv.org/abs/2305.10601
* [2] https://arxiv.org/abs/2210.03629
> Large Language Model Guided Tree-of-Thought
> In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel approach aimed at improving the problem-solving capabilities of auto-regressive large language models (LLMs). The ToT technique is inspired by the human mind's approach for solving complex reasoning tasks through trial and error. In this process, the human mind explores the solution space through a tree-like thought process, allowing for backtracking when necessary. To implement ToT as a software system, we augment an LLM with additional modules including a prompter agent, a checker module, a memory module, and a ToT controller. In order to solve a given problem, these modules engage in a multi-round conversation with the LLM. The memory module records the conversation and state history of the problem solving process, which allows the system to backtrack to the previous steps of the thought-process and explore other directions from there. To verify the effectiveness of the proposed technique, we implemented a ToT-based solver for the Sudoku Puzzle. Experimental results show that the ToT framework can significantly increase the success rate of Sudoku puzzle solving. Our implementation of the ToT-based Sudoku solver is available on GitHub:
I don't recall whether it was this paper or another that I read, but it talks about using the LLM's ability to also expose the probabilities of each token, to measure the validity of particular completions. However, that isn't exposed in the OpenAI chat APIs (GPT-3.5-Turbo / GPT-4), just the completions APIs (text-davinci-003 etc.)
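The token-probability trick looks roughly like this with the legacy completions endpoint (a sketch using the pre-1.0 openai Python SDK; the averaging heuristic is my own assumption, not from either paper):

import math
import openai  # pre-1.0 SDK; the chat endpoints didn't return logprobs at the time

def completion_confidence(prompt):
    # Ask text-davinci-003 for a completion plus per-token log-probabilities.
    resp = openai.Completion.create(model="text-davinci-003", prompt=prompt,
                                    max_tokens=64, logprobs=1)
    token_logprobs = resp["choices"][0]["logprobs"]["token_logprobs"]
    # Average per-token probability as a crude "validity" score for the completion.
    score = math.exp(sum(token_logprobs) / len(token_logprobs))
    return score, resp["choices"][0]["text"]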
https://github.com/jieyilong/tree-of-thought-puzzle-solver
"Large Language Model Guided Tree-of-Thought"
>For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%.
The answer to that is yes, but: it is costly and slow, there is node collapse, it impacts context length, and it injects biases.
Share this repository by clicking on the following buttons! <smiley face>
2023 in a nutshell.
1: https://aminoapps.com/c/neon-genesis-evangelion/page/item/ma...
https://github.com/JushBJJ/Mr.-Ranedeer-AI-Tutor/tree/testin...
I went through this in a video using the paper's official code - and it worked fairly well!
Definitely a great step forward in terms of reasoning tasks - even if it is an expensive step.