AI won't make you stupid per se. But if you become dependent on it, if it becomes your go-to place to delegate anything to before using your own brain, then that position is so comfortable that you will likely stay there, and it may erode both your actual abilities and your self-confidence in them.
Keeping your own brain trained (simply by using it) is one thing, but there's another angle: because LLMs sound so self-confident, people tend not to question the correctness of their responses, which are often random garbage (there's an actual random number generator involved in generating the answer; see the "temperature" parameter in ChatGPT's docs).
We need to stay in control, and that requires being able to check if a system's answer can even be true.
Example: I know people who type 34+23 into ChatGPT out of laziness. Keep doing that for two years and you won't even try anything else; it will have become a habit.
I'm bad at mental arithmetic, but even I can add those up to 57 faster than anyone can type them into ChatGPT, which costs enormous energy to answer a query better suited to a pocket calculator (on Linux/UNIX, bc -l is your friend).
Could we glimpse more about this possible future by looking at rich people who grew up with (human) personal assistants at their beck and call? Alas, not a group that commonly volunteers for psychological studies.
Me, I'm pessimistic. I think the key problem is the kind of stupid: people won't just offload math problems or memorizing the capital of Assyria, they'll offload executive function.
Then we'll be caught flat-footed by wilting plants, the machine will suggest Brawndo, and people will nod along because craving electrolytes sounds reasonable.
https://archive.ph/AFwsW