Only Darren
1 min read · Apr 17, 2024


I'd suggest my lack of understanding has more to do with my computational capacity than your educational one, so don't stress; your articles do fine. I just reread them until enough of my own neurons hook up for me to make use of the information.

I know they're doing this work to make the models more accurate and to reduce the compute, so are they only changing one thing at a time?

Take the precision of the weights, for example. I read that models trained with 1.58-bit ternary weights (-1, 0, 1), rather than floating-point numbers, have demonstrated comparable levels of accuracy but with much less compute.
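For anyone curious what "ternary weights" means in practice, here is a minimal sketch of one published recipe, the absmean scheme described for BitNet b1.58: scale each weight tensor by its mean absolute value, then round and clip every weight to -1, 0, or 1. The function name and the example matrix are illustrative, not from any particular codebase.

```python
import numpy as np

def ternary_quantize(w, eps=1e-8):
    """Quantize a float weight matrix to ternary values {-1, 0, 1}.

    A sketch of the 'absmean' scheme described for BitNet b1.58:
    divide by the mean absolute weight, then round and clip.
    """
    scale = np.mean(np.abs(w)) + eps           # per-tensor scaling factor
    w_q = np.clip(np.round(w / scale), -1, 1)  # each weight becomes -1, 0, or 1
    return w_q, scale

# Example: large weights snap to +/-1, near-zero weights drop to 0.
w = np.array([[0.9, -0.04, -1.2],
              [0.3,  0.02, -0.5]])
w_q, scale = ternary_quantize(w)
```

The compute saving comes from the matrix multiply: with ternary weights, each weight either adds an activation, subtracts it, or skips it, so no floating-point multiplications are needed at inference time.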

So are the various AI mobs combining more than one approach at a time to reach the accuracy and compute levels they're targeting, or is the AI space still heavily research-oriented?

Seems to me there's an arms race for AGI (which more training data alone won't accomplish, IMO), so I'm wondering what the goals are out there in AI dev land.

