The AI will not replace air traffic controllers yet.

AI can do many things, but in the end it is just a tool. AI will not replace air traffic controllers yet, though it may do so at some point in the future. When AI makes mistakes, like generating images of Black Nazi soldiers, we see that AI is fallible. Or, more precisely, the humans behind it are fallible. If we place today's systems on a seven-step scale of AI development, we are only at level three. And the core problem is that AI learns things differently than humans do.

AI has no imagination. It cannot act in situations it has not been programmed for. A learning AI uses stored descriptions as keyholes, and the data coming from its sensors is the key that activates an action. The problem is how the AI selects the data that it stores in its database. In practice, a human operator or programmer selects the data that is stored. The AI can record information during its missions, the operator then selects which recorded segments to keep, and each stored case must be connected to some action. The big problem is that in natural situations no two cases are exactly alike.
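A minimal sketch of that keyhole-and-key idea, as I understand it: operator-selected descriptions sit in a database as "keyholes", and incoming sensor data is the "key" that must match one of them before any action fires. All names, data shapes, and thresholds below are hypothetical, chosen only for illustration.

```python
# Keyhole/key matching sketch: stored cases map to actions; sensor data
# triggers an action only if it is close enough to a stored case.
from dataclasses import dataclass

@dataclass
class Keyhole:
    description: tuple[float, ...]   # operator-selected feature vector
    action: str                      # action connected to this stored case

def distance(a: tuple[float, ...], b: tuple[float, ...]) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_action(sensor_key: tuple[float, ...], database: list[Keyhole],
                  threshold: float = 0.5) -> str:
    """Return the action of the closest stored case, if it is close enough."""
    best = min(database, key=lambda k: distance(sensor_key, k.description))
    if distance(sensor_key, best.description) <= threshold:
        return best.action
    # Natural situations are never exactly alike: with no close match,
    # the system has no programmed response.
    return "no_action_defined"

db = [Keyhole((1.0, 0.0), "brake"), Keyhole((0.0, 1.0), "accelerate")]
print(select_action((0.9, 0.1), db))   # close to a stored case -> "brake"
print(select_action((5.0, 5.0), db))   # unseen situation -> "no_action_defined"
```

The weak point is exactly the one the paragraph above describes: every response has to be anticipated and stored in advance, and a situation that falls outside the stored cases leaves the system without an answer.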

AI could replace civil pilots within a couple of years. The problem is that most of a pilot's training involves preparation for abnormal and emergency situations, and the computer program must have a response for every one of those situations it is required to react to. That requirement leads to the so-called green skirt problem. When a car drives on autopilot, the autopilot must distinguish between a green traffic light and green clothing. If the AI does not separate those things, it may press the accelerator when it sees a green skirt.
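One way to picture the green skirt problem: colour alone cannot be the trigger, the detection also has to be the right kind of object in the right place. The detection format, object labels, and height threshold below are assumptions made up for this sketch, not any real autopilot's logic.

```python
# "Green skirt problem" sketch: only a green traffic light mounted above
# the road should allow acceleration, never green clothing on a pedestrian.
from dataclasses import dataclass

@dataclass
class Detection:
    color: str          # dominant colour of the detected object
    object_type: str    # e.g. "traffic_light", "pedestrian_clothing"
    height_m: float     # estimated height above the road surface

def may_accelerate(d: Detection) -> bool:
    return (d.color == "green"
            and d.object_type == "traffic_light"
            and d.height_m > 2.0)

print(may_accelerate(Detection("green", "traffic_light", 5.5)))        # True
print(may_accelerate(Detection("green", "pedestrian_clothing", 1.0)))  # False: a green skirt
```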


There is a thought experiment that a civilization which reaches Kardashev scale 3 and sends a probe to another solar system could accidentally destroy some other civilization, simply because the creators of the probe's AI forgot the braking algorithm. The system would then fail to brake before starting its approach and landing procedures, and a 30,000-ton probe would hit the planet at 0.5% of the speed of light. That impact would cause a terrible explosion.
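A rough back-of-the-envelope check of how terrible that explosion would be, assuming the 30,000 tons are metric tons and taking 0.5% of light speed at face value (relativistic corrections are negligible at that speed):

```python
# Kinetic energy of a 30,000-ton probe hitting a planet at 0.5% of light speed.
c = 299_792_458.0            # speed of light, m/s
mass_kg = 30_000 * 1_000     # 30,000 metric tons in kilograms
velocity = 0.005 * c         # 0.5% of the speed of light, m/s

kinetic_energy = 0.5 * mass_kg * velocity ** 2   # joules
megatons_tnt = kinetic_energy / 4.184e15         # 1 Mt TNT = 4.184e15 J

print(f"{kinetic_energy:.2e} J ~ {megatons_tnt:,.0f} megatons of TNT")
# Roughly 3.4e19 J, on the order of 8,000 megatons of TNT.
```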

If we draw a curve of that process, it follows a limit curve in mathematics. A limit means that a calculation approaches some value, such as zero, without ever reaching it. For AI, this kind of learning curve means that AI can come very close to the human level of consciousness, but it never reaches that level. At the same time, AI participates in the R&D process that is used to advance AI development, which means the R&D process itself accelerates.
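One simple way to write that idea down, using a hypothetical saturating curve chosen only to illustrate the limit behaviour: capability approaches the human level H but never reaches it at any finite time.

```latex
% Hypothetical learning curve approaching, but never reaching, the human level H.
\[
  P(t) = H\left(1 - e^{-kt}\right), \qquad k > 0
\]
\[
  \lim_{t \to \infty} P(t) = H, \qquad P(t) < H \ \text{for every finite } t
\]
```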

Still, cognitive AI is already partially real. In cognitive AI, the system observes what happens in front of its sensors and then tries to match those observations with details stored in its databases. The details that its sensors capture are the key that activates the databases, and the matched database record then controls the system's physical operations.
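Put as a loop, the idea looks roughly like the sketch below: observe, match the sensor details against the database, and let the activated record decide the physical operation. The sensor output, database entries, and action names are invented for illustration only.

```python
# Cognitive-AI loop sketch: sensor details act as the key, and the database
# record they activate drives the next physical operation.
def read_sensors() -> dict:
    # Placeholder for whatever perception stack the system actually uses.
    return {"shape": "runway", "distance_m": 1200.0}

DATABASE = {
    "runway": "extend_landing_gear",
    "obstacle": "abort_and_climb",
}

def control_loop_step() -> str:
    observation = read_sensors()
    # No matching record means no defined physical operation.
    return DATABASE.get(observation["shape"], "hold_current_state")

print(control_loop_step())   # -> "extend_landing_gear"
```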

https://scitechdaily.com/new-technique-significantly-improves-the-search-for-signals-from-distant-alien-civilizations/

https://www.freethink.com/robots-ai/ai-air-traffic-control

https://en.wikipedia.org/wiki/Limit_(mathematics)

https://setiandfermiparadox.wordpress.com/2024/03/01/the-ai-will-not-replace-air-traffic-controllers-yet/
