AI: thoughts, experiences, predictions
Every time I read about MCP servers I can't help thinking of that "other" MCP, you know, the one that ran on an ENCOM 511, and I wonder how much of the naming is intentional.
Anyway, I just wanted to start a thread where people can share their ideas around the latest AI hype. Personally, I've been heavily caught up in it. For the past few years I've been making predictions to my friends about the future of AI, and so far every prediction has come true sooner than expected. But it wasn't until late last year, when a friend shared a GenAI music video, that I started to get very serious about the topic. The video was obviously AI-generated to me, but he couldn't really tell. That shocked me, and I realized that a lot of people have huge blind spots around this stuff (including myself), so I've made it my mission to map the current AI landscape. My goal is to be the "expert" people in my circle can turn to with questions. I really believe we are at the beginning of another major technological advance that will reshape the world just like the Internet did.
Since the beginning of the year I've read several books about AI, and I'm subscribed to a few newsletters in addition to the tech news feeds I've followed for years. If you can recommend sources for me to follow, that would be great. My main interest has been law, policy, and education, but I do need to learn more about the inner workings of popular tools (it's difficult; I'm bad at math and not a good programmer). Generally I spend about two hours a day reading about AI, but ironically I rarely use these tools myself and I can't write a good prompt to save my life.
Re: AI: thoughts, experiences, predictions
I am not very much interested in art created with the help of AI tools. I prefer crappy art done by humans.
OK, so one could argue that AI tools are just tools like any others, and since I will look at a marble statue done with chisel and hammer and don't insist on artists clawing them out with their teeth and fingernails, I should not dismiss AI tools.
But they're different. We completely understand how chisel and hammer work, and the artist is in control of every single stroke. AI tools are big black boxes. We don't understand how they work; nobody does. It may very well be that they are just big plagiarization machines, stealing from their training data, which usually was collected without consent. And their output, if unfiltered, is still pretty bad and bland. Everyone knows about the difficulty image generators have producing human hands with the correct number of fingers. Microsoft is defaulting to AI translations of its German help pages right now and... well, the tool does not understand that in tech, you simply do not translate some terms.
Of course, there is nuance. What I'm not interested in is an AI image from the prompt "Cartoon style, panda sitting on a living room couch reading a book". But a longer, more elaborate prompt? Maybe there can be art in there. But then again, why don't I just read that prompt?
For technical stuff, sure, use all the AI you can get away with. That stuff is pretty soulless to begin with, and if we're being honest, before AI, we would google our question, land on a Stack Overflow page where someone asked exactly that, and copy the answer from there. If we were being transparent, we'd link to the source. The AI tools are now smart enough to combine two things: you can ask "how do I convert a PNG to GIF in CMake?" and chances are you'll get a usable answer, whereas before, you would have to decompose the problem first.
I'm personally using Perplexity from time to time. It will occasionally produce hallucinations, give advice that just does not work, or maybe used to work in older versions of whatever tool you ask about, even though you explicitly gave the version you need it for in the question. But it will often correctly say what you're trying to do is not possible, or at least not possible that way, and give workarounds. And it will cite its sources, so you can read the originals and cite them accordingly.
As I understand it, it will first use a language model to transform your question into a usable search query, execute that, then summarize the results with another language model.
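That description matches the general "retrieval-augmented" pattern. Here's a rough sketch of the idea in Python; to be clear, the real Perplexity pipeline isn't public, so `rewrite_query`, `search`, and `summarize` below are hypothetical stand-ins for the language-model and search steps, not its actual API:

```python
# Illustrative sketch only: each function stands in for a much larger component.

def rewrite_query(question: str) -> str:
    """Stand-in for an LLM turning a chatty question into search keywords."""
    stop_words = {"how", "do", "i", "the", "a", "in", "what", "is"}
    words = [w.strip("?") for w in question.lower().split()]
    return " ".join(w for w in words if w not in stop_words)

def search(query: str) -> list[dict]:
    """Stand-in for a web search; returns documents with their source URLs."""
    return [{"url": "https://example.com/doc1", "text": f"Result text about {query}"}]

def summarize(question: str, results: list[dict]) -> str:
    """Stand-in for a second LLM pass that answers from the results and cites them."""
    citations = ", ".join(r["url"] for r in results)
    return f"Answer based on {len(results)} result(s). Sources: {citations}"

def answer(question: str) -> str:
    query = rewrite_query(question)      # step 1: LLM -> usable search query
    results = search(query)              # step 2: execute the search
    return summarize(question, results)  # step 3: LLM summarizes, citing sources

print(answer("How do I convert a PNG to GIF in CMake?"))
```

The citation step falls out naturally from this design: the summarizer only ever sees documents whose URLs it already has, so it can point back at them.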
Yeah, sorry, I'm not big on the tech behind it all. I know it's all basically just giant matrix on vector multiplications with discretization in between.
We also have the Moderator Control Panel here

Re: AI: thoughts, experiences, predictions
Some dude used 6 AIs to make a small video game (youtube)
At this point, AI is an interesting gimmick, and I sometimes use it for repetitive tasks and searches it can easily grasp, but it's never 100% reliable as a research tool or as a creative aid. I'm glad the surrealists in particular didn't live to see this. I have used Kling AI to animate photos of my long-dead relatives wearing Nazi uniforms in the 1940s, and it looked funny and scary at the same time: it more often than not completely botched the faces, but it also aped life closely enough that it'd fool someone who hadn't seen other photos of them. I agree with Z-Man about the soullessness inherent in most AI art, but I remain optimistic that it can contribute to some great artworks and a better standard of living/working if it is used the right way by the right people.
What I find most concerning about AI is what the excessive need for more electricity and computing power to generate crappy nonsense images will do to the climate. It might help mitigate these problems, but maybe not; I don't know enough about that either. All these companies advertise their language models like they invented the wheel and are democratizing access to the tree of knowledge in the Garden of Eden, while at the same time donating to Trump.
As for archeology, it might have some promising applications, but since we usually don't have a lot of funding (unlike medicine or physics), all this is my personal idea of what the future might look like, in an ideal world:
- we will have a tool at some point where you can put in lots of pottery shards or other objects, say, metal finds (not just Roman coins but also simple shoe nails and fragments of horse harness). That tool will be trained on photos, physical/chemical properties, 3D models, and weight, compare your input against the huge file set it was trained on, and quickly recognize and date what you have put in. Maybe it will also do a complete 3D scan and georeference the object in the process, while calculating statistics for an entire archaeological site (think "60 % Roman import pottery, 40 % Germanic pottery", or metal finds with "40 % military use, 30 % military and/or civilian, 10 % civilian, 20 % unidentifiable", from around 20 AD, then different finds from 100 years later, meaning the settlement was abandoned and later re-used for some reason; fragmentation/weight averages, etc.). There was an app project for the pottery part, but that project failed miserably.
- it might help us differentiate between stratigraphical layers faster and decide which archaeological/soil layer to document or remove next. Early-stage AI tools for this already exist to some degree, and this is especially useful for geologists. Random image from the internet below, to show that sometimes the layers resemble each other quite closely, and it's really a deliberate decision by the lead archaeologist where to draw the lines, based on the archaeological context, like soil properties, features, and finds in a specific layer.

- simple tasks that are part of our daily routine: measuring a feature, taking a photo, fixing errors in photos. Blurred photos caused by camera shake might be a thing of the past soon. We often paint the outline of a feature in spray paint so the guy with the total station can easily measure it. You used to need two people to measure something. There are already robotic total stations that require just one guy holding a receiver to the point you want to measure; the total station will automatically set itself up, perfectly leveled, aim for the thing you are holding in your hand, and save the GPS data. AI will further simplify that process, I imagine, but I'll likely be dead by the time it does. Again, random picture below. Imagine a total station that you can tell "measure all the white lines". Now imagine a robotic total station that measures this stuff as you draw the white lines. Now imagine a pen or a phone that measures this stuff with perfect GPS coordinates, and preferably also makes a 3D scan/photogrammetry of your archaeology trench, without even needing a heavy, expensive total station.

Fixing errors in photos: that's something I want! Let's say you want to put the photo below in your publication, but the year and the numbering on the black photo board are wrong because someone forgot to change them before the photo was taken. This happens a lot, and I'm usually the one who is asked to fix it in GIMP. Again, this is something that can be useful for us as well as for geologists, criminologists, and so on, but so far I haven't found software that can do this.

Last edited by Word on Sun May 25, 2025 6:07 pm, edited 4 times in total.
Re: AI: thoughts, experiences, predictions
- reconstructing stuff: if you put an old b/w image into ChatGPT and ask it to convert it to color and 4K, it won't exactly do that yet. It will instead dream up something that looks quite close, but you'll quickly notice weird differences. See attachments.
(edit: the attachment comments got swapped. first image shows the tram, the b/w image is the original one and the colorized one the ChatGPT reconstruction!)
- kyle
- Reverse Outside Corner Grinder
- Posts: 1963
- Joined: Thu Jun 08, 2006 3:33 pm
- Location: Indiana, USA, Earth, Milky Way Galaxy, Universe, Multiverse
- Contact:
Re: AI: thoughts, experiences, predictions
I saw a post on X the other day where someone quoted another post, something to this effect:
But at the same time, Tesla FSD literally drives me around everywhere. The reasons I take over or intervene are no longer safety-related, but "don't be a jerk"-related, or a lot of the time it's because I'd like to travel faster. I never realized how awful people are at keeping speed on the highway, or how many hills there are that slow ICE cars down. Teslas now have driving modes that work differently than in the past. Before, you would set a speed and it would just go that speed and pass people if needed; now it's a little more complicated and will slow down, and not always get over to pass when it can. With all that said, and 2k miles of using it on my new Tesla, I'm 100% sure that next week in Austin, Tesla will be operating a small fleet of fully self-driving cars to pick up people, without a safety driver, using pure-vision neural nets.
As for programming, AI (Grok, Copilot) works as my junior engineer: it can typically do those tasks OK, maybe with a few issues that it can eventually address. When I give it the tough problems, it can help me understand them much better, but it cannot actually solve them.
Another cool thing AI helped me with was an A/C issue. My A/C stopped working, and it gave me ways I could check whether my system was working or not, even in cooler temps where I did not need it. Ultimately, after I had fixed the issues I could personally fix, the real problem was the refrigerant level: there is a small leak, so I had to call the A/C people out to fix it. I was able to read the levels the lines showed and then put them back into the AI to make sure they were being honest about it, too.
For art and all that, I love the imperfections that humans leave; far better than AI.
I have more to say on this, but that's enough for me for now
Nice little jab at how poor Microsoft products have become, but also a little of what Z-Man mentioned: the AI is a black box, in a way, kind of how our minds are, too. If you blindly accept the code it gives you, it could be wrong. And it's even more frustrating if you are doing the right thing, understanding and fixing the code before using it, but you rely on a framework where they don't do this.
We know we all have used MS Teams
Microsoft code is being built 30% by AI
Re: AI: thoughts, experiences, predictions
True, I don't know how people use Windows these days; it's a real mess. However, Microsoft seems really well positioned to be a leader in AI.* One of the books I read this year is AI Valley, which is kind of an insider's look at the top people in the US artificial intelligence race. Microsoft has been a key player in the success of OpenAI, and Azure is handling a lot of compute. On top of that, Microsoft's business ecosystem is one of the most likely avenues for new AI tools to make a huge impact, so it will be interesting to see how this plays out.
* AI is really too generic a term to describe these tools, and I think we need more specific, descriptive terms for the different families. Your car's autopilot is not the same as an LLM, and once any AI becomes ubiquitous, it stops being referred to as AI. I've often turned to the Organisation for Economic Co-operation and Development (OECD) for guidance on AI; they classify AI systems based on impact, but not how we should refer to the different types colloquially (and multi-modal models make this even harder). Taxonomy in emerging fields is hard.
Re: AI: thoughts, experiences, predictions
I'm going to chime in the Terminator aspects. We are not currently building SkyNet, sorry.
When we're talking about neural networks, which is usually what we mean by AI, what's behind them is pretty, um, interesting.
They're based on an extremely basic and outdated model of how the human brain works. I don't know that updating them would help with the problem I'm going to discuss, because it doesn't matter a lot. Basically, a single node is supposed to be comparable to a neuron in the brain. A single node will take all of its inputs, do some math with them, and then output a single number from 0 to 1 to be used as input in the next layer. That math? It's just multiplying some constant by the value that came out of another node, adding all of those products up, then adding another arbitrary constant, then running the result through a function that turns it into a number from 0 to 1, and outputting that. Training is where they tweak all the constants in the various nodes. This is also where the common refrain "We don't know how AI works!" comes from. It's true that we don't know how a hand-drawn L gets identified as an L, because when we look at the internal state of the trained model, we can't make sense of what we're looking at intuitively. But we do know how AI works; we built it!
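The per-node arithmetic described above fits in a few lines of Python. This is only an illustration: the logistic function used here is one common choice of "squashing" function, and the constants are arbitrary stand-ins for what training would tune:

```python
import math

def node_output(inputs, weights, bias):
    """One 'neuron': weighted sum of inputs, plus a bias, squashed to (0, 1)."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # logistic squashing function

# Training would tweak these constants; here they are arbitrary.
out = node_output(inputs=[0.5, 0.2], weights=[1.5, -2.0], bias=0.1)
print(out)  # a single number between 0 and 1, fed to the next layer
```

A whole layer is just many of these nodes run over the same inputs with different constants, which is why the computation reduces to matrix-on-vector multiplication.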
Anyway, the obvious problem here is that the information a single node can work with is dwarfed by what an actual neuron in the brain handles. If each neuron could only provide a single value, we wouldn't have enough neurons to encode all of our memories, skills, behaviors, etc., on top of all the subprocessing that happens in the brain (facial recognition, sound recognition, language understanding, visual processing, and so on). Actual biological neurons, on an individual level, are still complex enough that we don't really understand how the encoding happens; we just know that individual neurons can be involved in hundreds of completely different tasks, from retrieving memories to recognizing Homer Simpson whenever we see him.
Then I look at the size of the computers that are being used to power Google, ChatGPT, etc. These things are HUGE. A computer running a language model, for example, can take up an entire datacenter of reasonable size. How much space does the language center of your brain take up? Like a few milliliters? And we can still, for the most part, identify AI speech vs human speech (although I understand that AI is getting a lot better at it).
Using current models, I don't know if there are enough atoms in the universe to build the computer that can become sentient on a level that can actually threaten mankind a la SkyNet. It's just not physically possible with what we have, and I don't think it will be physically possible in our lifetimes. Future models? No, I don't think "anything's possible". Not even quantum computers are going to change this significantly, because that's not even where they're strong, assuming we ever build a functional one in the first place. I honestly think that only by figuring out how to build a computer out of actual neurons are we going to be able to create sentient AI, and then it'll be biological in nature and therefore not immediately aligned against other biological life forms.
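For a sense of the scale gap, here's some back-of-envelope arithmetic. The figures are commonly quoted, order-of-magnitude estimates only, and treating one model parameter as one "synapse" is already generous to the model:

```python
# Rough, commonly quoted figures; treat as orders of magnitude, not exact counts.
brain_neurons   = 86e9   # ~86 billion neurons in a human brain
brain_synapses  = 1e14   # ~100 trillion synaptic connections
gpt3_parameters = 175e9  # GPT-3's published parameter count

# Even equating one parameter with one synapse, the brain has hundreds of
# times more connections, and a biological synapse carries far more state
# than a single floating-point number.
print(brain_synapses / gpt3_parameters)
```

That ratio is before accounting for the point above: each biological neuron is itself a complex device, not a single multiply-add.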
I said "on a level that can actually threaten us" as a qualifier largely because the word "sentience" itself is a tricky beast. It's so tricky that we don't actually have a boundary we can draw where we say "below this line, you are not sentient, you are a MAGA nut" and "above this line, you are capable of thought". So I understand that we're not really going to be able to tell if a computer has started really thinking or not, and it's not because we don't understand how AI works; it's because we don't know what the criteria are for determining whether that's happening.
I'm not saying AI isn't dangerous, I'm just saying we're not going to be going to war against the machines anytime soon, if at all, ever.
Check out my YouTube channel: https://youtube.com/@davefancella?si=H--oCK3k_dQ1laDN
Be the devil's own, Lucifer's my name.
- Iron Maiden
- kyle
- Reverse Outside Corner Grinder
- Posts: 1963
- Joined: Thu Jun 08, 2006 3:33 pm
- Location: Indiana, USA, Earth, Milky Way Galaxy, Universe, Multiverse
- Contact:
Re: AI: thoughts, experiences, predictions
Lucifer wrote: ↑Sat May 31, 2025 10:50 pm
They're based on an extremely basic and outdated model of how the human brain works. I don't know that updating them would help with the problem I'm going to discuss, because it doesn't matter a lot. Basically, a single node is supposed to be comparable to a neuron in the brain. A single node will take all of its inputs, do some math with them, and then output a single number from 0 to 1 to be used as input in the next layer. That math? It's just multiplying some constant times the value that came out of another node, adding all of them up, then adding another arbitrary constant, then running it through a function that turns the result into a number from 0 to 1, and outputting that. The training involved is where they tweak all the constants in the various nodes. This is also where the common refrain "We don't know how AI works!" comes from. It's true that we don't know how a hand-drawn L gets identified as an L, because when we look at the internal state of the trained model, we can't make sense of what we're looking at intuitively. But we do know how AI works, we built it!

This is pretty close to how things started, with the first image recognition AI (I can't think of the name of it right now), but it's evolved quite a lot from there. I've not followed all the details, but the type of model that has been giving much more success is somewhat similar in concept, though even more of a black box to me since I haven't kept up on it enough: https://en.wikipedia.org/wiki/Transform ... hitecture)
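The node math Lucifer describes (weighted sum, plus a constant, squashed to 0..1) can be sketched in a few lines of Python. This is a toy illustration with made-up weights, not code from any real framework:

```python
import math

def node(inputs, weights, bias):
    """One 'neuron': weighted sum of inputs plus a bias constant,
    squashed to the range 0..1 by a logistic (sigmoid) function."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Example: one node with two inputs; weights and bias are arbitrary here.
# Training is what tweaks these constants across all the nodes.
print(node([0.5, 0.8], [1.2, -0.7], 0.1))  # some value between 0 and 1
```

Stack layers of these nodes and you have the classic feed-forward network the quote is talking about.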
Lucifer wrote: ↑Sat May 31, 2025 10:50 pm
Then I look at the size of the computers that are being used to power Google, ChatGPT, etc. These things are HUGE. A computer running a language model, for example, can take up an entire datacenter of reasonable size. How much space does the language center of your brain take up? Like a few milliliters? And we can still, for the most part, identify AI speech vs human speech (although I understand that AI is getting a lot better at it).

This is a misconception. While training a model takes up a ton of compute and power (xAI is building out their next training computer to use roughly 1.21 gigawatts), the inference compute needed to run the model is much smaller. The Tesla FSD computer likely has billions of parameters but uses only around 100 watts of power, nothing really more powerful than a high-end desktop computer: a 20-core 2.2 GHz CPU with 50 TOPS, specifically designed for inference from camera and some sound data.
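Just to put the training-vs-inference gap in perspective, here's the arithmetic using the figures quoted in this post (the post's own claimed numbers, not measured values):

```python
# Scale comparison: claimed training-cluster power vs. one inference computer.
training_watts = 1.21e9   # ~1.21 GW claimed for xAI's next training buildout
inference_watts = 100     # ~100 W claimed for Tesla's in-car FSD computer

ratio = training_watts / inference_watts
print(f"training draws ~{ratio:,.0f}x the power of one inference computer")
# → training draws ~12,100,000x the power of one inference computer
```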
While I don't know a whole lot about it, training is more of a way to compress the data, I suspect. The reason for the supercomputer for training is so that almost all real-world data can be fed in at once so that it can be compressed.
Then a few thoughts about the brain: while we don't know in detail what every exact neuron is for, we are able to implant probes into humans and interpret them with AI. If you look at Neuralink, with just a few threads placed into the brain it's able to pick up actions the person wants to do, like controlling a mouse, and soon a wheelchair. With some calibration it can pick up on signals from those threads and, using AI, build a model that works for that person. Next year they are planning to go the other way and send signals into the visual cortex, to allow blind people to see.
I'll add that Tesla is already doing some pretty cool things with their humanoid robot, which is already performing jobs in the factory, and they are training it to do much more: https://x.com/Tesla_Optimus/status/1925047336256078302 (that's the latest). They already have a walking model that was trained without any video input; it simply reacts to how it steps, and it's pretty cool too: https://x.com/Tesla_Optimus/status/1866171391156113740
(I follow a lot of the Tesla AI engineers who have made these claims about how it's trained, even though it may not be stated in those posts.)
Edit: One other thing I was planning to post: ChatGPT is not trained for image editing, Grok is; that's the difference between them. I actually ran it with a better prompt and got a few better results; one did alter it a little bit. I was also curious if it could tell me what the image was; here's how Grok responded:
This image depicts a historical scene with a tram on a tree-lined street, flanked by ornate buildings. The architectural style and tram design suggest it’s from an early 20th-century European city. Based on the building designs, which resemble neo-Renaissance or neo-Baroque styles common in Germany, and the tram system, this is likely a street in Berlin, possibly Unter den Linden or a similar boulevard. This area was known for its grand architecture and tram lines in the early 1900s.

Re: AI: thoughts, experiences, predictions
Here's a video about how AI introduces many people to a fake version of history. I don't particularly like that channel but the guy is correct. Some more examples: there's this guy inserting himself into "selfies" with historical figures, and then there are countless sites like AIHistoryPOV, or this video of Marilyn Monroe. Yes, these videos can be highly entertaining and innocent fun, but they're also distorting things, leaving out things, adding and mixing things, de- and recontextualizing people, places and things so that there's not much truth left. Become an Auschwitz inmate! Experience the Hiroshima bombing! Why read about that stuff when you can be there?
Re: AI: thoughts, experiences, predictions
Right. I'm currently reading a book by Jerry Kaplan and he states that for AI to be like Skynet, first it would have to be hooked up to countless disparate systems and there is basically no chance of that ever happening. And while we can debate about what is sentient and what is alive, we'll eventually have to make some practical decisions about how agents get treated. One of the current goals of AI tech is to solve long-term memory problems and how to wire in more types of unique data -- all in order to make agents both more useful and "life-like". Without a doubt when we have personal agents people will not only develop extreme attachment to them they may take out something that's the equivalent of a life insurance policy for their agent. Alive or not, they will practically be people.
This is correct. I know specifically with image diffusion models the first pass is not only a downscaled image but usually 1-bit color for extreme edge detection. In the next passes the image data are less compressed and eventually have color information, then near the end it's basically the full color image with varying degrees of noise. It's how the model learns to identify features, combined with the adversarial model that oversees the training. Or at least that's how I've seen it explained. Super interesting!
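The progressive-corruption idea described above can be sketched in a few lines. This is only a toy illustration of the general concept: the function name is made up, pixels are floats in 0..1, and real diffusion training typically uses Gaussian noise added on a schedule rather than the uniform blend shown here:

```python
import random

def add_noise(pixels, noise_level):
    """Blend each pixel with random noise.
    noise_level = 0.0 leaves the image unchanged; 1.0 is pure noise."""
    return [
        (1.0 - noise_level) * p + noise_level * random.random()
        for p in pixels
    ]

image = [0.2, 0.9, 0.5, 0.7]            # a tiny grayscale "image"
slightly_noisy = add_noise(image, 0.1)  # early pass: features still visible
mostly_noise = add_noise(image, 0.9)    # late pass: nearly pure noise
```

Training then teaches the model to run this process in reverse, recovering structure from noise.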
Any of these engineers have a substack I can add to my feed reader? (I don't have twitter.) I've been trying to track people who seem to have their finger on the pulse, though most of the people I follow are policy experts and not engineers -- so having a few in my feed would help round out my understanding. Cheers.
- kyle
Re: AI: thoughts, experiences, predictions
sinewav wrote: ↑Sun Jun 01, 2025 9:12 pm
Any of these engineers have a substack I can add to my feed reader? (I don't have twitter.) I've been trying to track people who seem to have their finger on the pulse, though most of the people I follow are policy experts and not engineers -- so having a few in my feed would help round out my understanding. Cheers.

Probably not, they are hardly on X.
Andrej Karpathy is a great add. Not sure if he's on Substack, but he's on YouTube: https://www.youtube.com/andrejkarpathy

Re: AI: thoughts, experiences, predictions
Something I'm excited about that you have probably heard of is the Vesuvius Challenge, which might help us read lots of previously unknown ancient texts. The challenge focuses on 5 papyrus scrolls, but the villa they were stored in isn't even fully excavated yet, and it already contained hundreds of other such scrolls with texts that haven't been read since antiquity.
(This may be of interest to the mathematicians among you since they actually award huge sums of money if you come up with a new solution. Elon Musk is a donor but in this particular case the cause is fine.)
- kyle
Re: AI: thoughts, experiences, predictions
Edit: decided to put in a separate thread, don't want it bogged down with what I posted
