Why is the general design process iterative, and why does it involve both conception and verification?
I believe that the future trend in AI will focus on solving inverse problems through trial and error.
The AI Scientist is an excellent illustration of this: it leverages an LLM's ability to devise solutions while automating problem-solving through iterated cycles of conception and verification.
Other examples include evolutionary programming and OpenAI o1's new approach to reasoning.
To grasp this trend, I will explore why the design process is inherently iterative and built on conception and verification. I will also touch on OpenAI o1's inference capabilities towards the end.
How to create the desired result? That’s the issue.
One of the core challenges in any design process is how to create the desired result. This is the fundamental issue that drives design work.
Generally, a design act refers to the conceptualization of how to achieve specific purposes. The approach derived to achieve these purposes is typically called a 'solution'.
'Cause-result' relationship
At the core of the design process is finding the cause that produces the desired result, that is, the cause that fulfills the purposes.
Thus, a design act involves devising various candidate methods that could lead to the desired outcomes. If a method successfully achieves the purposes, it is considered a 'solution'.
Trial and error, a cycle of conception and verification, is the only way
The cause-result relationship between 'solutions' and 'results' can be explained as follows.
There are typically multiple solution candidates for any given set of purposes. If a candidate meets all the criteria, it is regarded as a 'solution'.
In fact, there is no straightforward way to derive a cause directly from a desired result. The only recourse is to repeatedly generate candidate ideas and verify whether each one leads to the desired outcome.
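As a concrete sketch of this cycle, here is a minimal generate-and-verify loop in Python. The propose_candidate and meets_criteria functions are hypothetical placeholders (an LLM call, a simulation, or a physical experiment could stand in for them); the point is only the shape of the loop.

```python
import random

def propose_candidate(purpose, rng):
    # Hypothetical placeholder for "conception": in practice this could be
    # an LLM, an evolutionary operator, or a human designer proposing an idea.
    return {"purpose": purpose, "idea_id": rng.randrange(1_000_000)}

def meets_criteria(candidate):
    # Hypothetical placeholder for "verification": e.g. running an experiment,
    # a simulation, or a test suite against the purposes.
    return candidate["idea_id"] % 97 == 0  # stand-in for a rare success

def design_by_trial_and_error(purpose, max_iterations=1000, seed=0):
    """Repeat conception (propose) and verification (check) until a candidate
    satisfies the purposes or the iteration budget runs out."""
    rng = random.Random(seed)
    for i in range(max_iterations):
        candidate = propose_candidate(purpose, rng)  # conception
        if meets_criteria(candidate):                # verification
            return candidate, i + 1                  # this candidate is a 'solution'
    return None, max_iterations                      # no solution within the budget

solution, tries = design_by_trial_and_error("achieve the desired result")
print(solution, tries)
```

In this article's framing, the AI Scientist, evolutionary programming, and o1-style reasoning all fit this shape; they differ mainly in how candidates are conceived and how they are verified.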
Appendix: Why does the quality of inference improve the more time is spent on reasoning?
On September 12, 2024, OpenAI introduced o1, which they describe as "an AI model designed to take its time and carefully think before responding."
https://openai.com/index/introducing-openai-o1-preview/
Additionally, it has been reported that the quality of inference improves the more time is spent on reasoning.
https://openai.com/index/learning-to-reason-with-llms/
This led me to the following hypothesis:
I believe this effect does not manifest significantly in forward problems, where the answer can be derived directly from the given conditions.
However, when attempting to solve inverse problems through a cycle of candidate generation and verification, it seems quite natural that spending more time yields greater accuracy. The reason is that the process consists of iterative cycles of conception and verification: each additional cycle is another chance to propose and verify a candidate that achieves the goal.
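To illustrate this hypothesis, here is a minimal simulation sketch (my own toy model, not OpenAI's method; the per-cycle success probability is an arbitrary assumption): each conception-verification cycle has some independent chance of producing a verified answer, so accuracy grows with the number of cycles allowed.

```python
import random

def solve_with_budget(success_prob, budget, rng):
    """One inverse-problem attempt: each conception-verification cycle
    independently yields a verified candidate with probability `success_prob`.
    Returns True if any cycle within the budget succeeds."""
    return any(rng.random() < success_prob for _ in range(budget))

def estimate_accuracy(success_prob, budget, trials=10_000, seed=0):
    # Empirical accuracy over many problems; analytically it is 1 - (1 - p)^budget.
    rng = random.Random(seed)
    wins = sum(solve_with_budget(success_prob, budget, rng) for _ in range(trials))
    return wins / trials

# Accuracy rises as more conception-verification cycles ("thinking time") are allowed.
for budget in (1, 2, 4, 8, 16, 32):
    print(f"budget={budget:>2}  accuracy={estimate_accuracy(0.05, budget):.3f}")
```

Analytically, the chance of finding a verified solution within n cycles is 1 - (1 - p)^n, which increases monotonically with n; this is the simple sense in which spending more time on the conception-verification loop improves the quality of the final answer.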