
CRAZYTANK News vol.231 / Becoming AI: Reflections on an Experiment, and the Human Value That Lies Beyond It

Notice.
In the near future, this CRAZYTANK newsletter will either move to one of our members-only venues or transition to a paid version. We will let you know once the official date is set. Until then, please enjoy the free version.

ーーーーー

Christmas has just passed! We are sure many of you are pushing to wrap up various tasks before the end of the year. A number of infectious diseases seem to be going around, so please take care of your health and enjoy the last week of 2024, even amid your busy schedules!

We at CRAZYTANK have repeatedly shared our predictions about the future of AI, but 15 years ago, in 2009, our CEO, Mr. Takehana, undertook an architectural project in which he attempted to “become an AI”.

What does it mean to “try to become an AI”? And what was the result?

We touched on this in some of Takehana's articles back in 2018, but we would like to revisit it in this newsletter.

Please take a moment to read it.


Becoming AI: Reflections on an Experiment, and the Human Value That Lies Beyond It


Humans find existential value in what they think.

It may be obvious, but we humans place great importance on the value of what we conceive and think.

“I think, therefore I am,” said the philosopher Descartes, and we humans tend to equate the value of our ideas with the value of our existence.

Since AI can derive results from such “ideas” and “thoughts” in an instant, it is not surprising that every time humans use it, they find it convenient yet feel the value of their existence is threatened.

That said, it is no longer realistic to expect AI's evolution to stop.

That is why we believe that, in order to coexist with AI, humans must properly learn about it and “think” and “act” for ourselves about how to use it. It is there that the value humans have always cherished remains.

Abandoning human greed and ego to design using only data (becoming an AI).


In 2009, 15 years ago, CRAZYTANK CEO Takehana, already anticipating the subsequent evolution of artificial intelligence (AI), undertook an experiment in which he himself became an AI.

Takehana, a graduate of an architecture department, worked at a design firm before going independent, and was asked by one of his clients to design a house.

What he did there was to “abandon his ego and personal assertions as an architect and design the house using only the client's information (data).”

This is exactly what an AI does.

AI itself has no ego (at this point). It is, at the cutting edge, a technology that derives optimal solutions from vast amounts of data.

It is when human feelings and thoughts are mixed into it that the human ego enters the picture.

So how did Takehana approach “designing by trying to become an AI”?

That is described in the following article.

The title is “An Attempt at Becoming an Artificial Intelligence x The Architecture of OS Thinking x How to Grow After It's Given.”


The design took shape through a process of repeated modeling over more than two years, during which he spent several days with the client's family in each of the four seasons and incorporated the information he obtained (daily movement lines, conversations, the family's behavior patterns, and so on) into the models.

When making the models, he set aside all architectural rules and personal thoughts and included only the data, so at first the models did not “stand up” as architecture.

(If you have used any of the generative AIs currently on the market, you will surely recognize this phase of “it does not stand up” or “something feels off.”)

He did not worry that the models did not yet “stand up.” He simply kept collecting data, repeating the cycle of building a model, presenting it to the client, and then feeding the newly obtained information back into the next model.

As human beings, we would normally feel uneasy presenting an “unworkable” model to a client; we would think “I don't want to upset them,” or act on the urge to “somehow make it work.” Takehana did none of this (because this was an attempt to become an AI); he simply kept gathering data and rebuilding the model each time.


What happened as a result speaks to what we are currently experiencing with AI


In the outcome of this experiment, we can see what will happen (or has already happened) as AI continues to evolve.

What happened is that, at some point, the model suddenly began to “stand up” as architecture. Again, without any architectural rules or personal feelings being applied, this happened abruptly in the course of repeatedly gathering data and rebuilding the model.

This is the same phenomenon as, for example, an image-generation AI that at first seems “unusable” or “produces strange output,” but that, once it passes a certain point, begins to produce images with nothing off about them at all.

In the field of generative AI, since the appearance of ChatGPT in November 2022, many people have experienced various generative AIs going from a stage where they seemed “not usable” to a “usable” phase within days, weeks, or months. Exactly the same thing was happening in this experiment.

So, for the client (a human), what was the architecture created from data alone actually like?

・For the first time, the children were able to run around the house without bumping into the walls.

・The client told us, “For some reason, I feel as if I am having a conversation with my old self.”

By abandoning the architect's ego and designing with data alone, it is fair to say, the result was architecture that reflects the client's true needs.

Moreover, as AI evolves, it will become possible, not only at the design stage but also in construction, to calculate and arrange building materials from the drawing data without producing waste. This will make it possible to build architecture that reflects the client's needs with the highest environmental performance.

Even then, can we say that our own ideas and thoughts are the most important?

Can we say that our own ego can truly function for the sake of the global environment and human happiness?

As AI evolves, human beings will be confronted with their own egos and the value of their own existence, and many will be troubled and suffer because of it.

At CRAZYTANK, we continue to work on “creating the boundary between technology and humans” with an eye toward that future. We act on the belief that, if the evolution of AI cannot be stopped, we have no choice but to create new fields ourselves, so that human beings can find richness as human beings in fields different from before.

We believe we need to recognize right away that this is not something that could never happen in our own industry or field; it can happen to anyone. We should then each work on what we can do now in our respective fields.

If you take a positive view of this issue, we at CRAZYTANK would welcome a discussion with you.

We can do something together. We hope to work with as many people as possible.

We also published a new article yesterday with a similar forecast of the future. Please take a moment to read it as well.

The title is “The future of ‘Hey, human, you can't touch me,’ as I found out when I became an artificial intelligence.”

