Microsoft lays the limitations of ChatGPT and friends bare
R&A IT Strategy & Architecture
by gctwnl
6d ago
Microsoft researchers published a very informative paper on their pretty smart way of getting GenAI to do 'bad' things (i.e. 'jailbreaking'). They actually set two aspects of the fundamental operation of these models against each other.
Don’t forget all the things that a core team performs to a tee, but that you never see
R&A IT Strategy & Architecture
by gctwnl
1w ago
The third 'fragmentation wave' of the IT revolution is upon us, it seems. Fragmentation is a recurring pattern in the IT revolution, one that has given us object-oriented programming and agile/DevOps as ways of managing complexity. Now it is the organisation's turn to fragment. How strong is your mission, your 'why'? You might soon find out, thanks to IT.
Ain’t No Lie — The unsolvable(?) prejudice problem in ChatGPT and friends
R&A IT Strategy & Architecture
by gctwnl
1M ago
Thanks to Gary Marcus, I found out about this research paper. And boy, is this both a clear illustration of a fundamental flaw at the heart of Generative AI and the uncovering of a doubly problematic, potentially unsolvable issue: fine-tuning of LLMs may often only hide harmful behaviour, not remove it.
Will Sam Altman’s $7 Trillion Plan Rescue AI?
R&A IT Strategy & Architecture
by gctwnl
2M ago
Sam Altman wants $7 trillion for AI chip manufacturing. Some call it an audacious 'moonshot'. Grady Booch has remarked that such scaling requirements show that your architecture is wrong. Can we already say something about how far we would have to scale current approaches to get to computers as intelligent as humans, as Sam intends? Yes, we can.
The Department of “Engineering The Hell Out Of AI”
R&A IT Strategy & Architecture
by gctwnl
2M ago
ChatGPT has acquired the ability to recognise an arithmetic question and react to it by creating Python code on the fly, executing it, and using the result to generate the response. Gemini contains an interesting trick Google plays to improve benchmark results. These (inspired) engineering tricks lead to an interesting conclusion about the state of LLMs.
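The pattern the post describes — route arithmetic to real computation instead of letting the model predict digits — can be sketched in a few lines. This is a toy illustration under loud assumptions: the regex trigger, the expression extraction, and the reply wording are all invented here for illustration; they are not OpenAI's actual implementation.

```python
# Toy sketch of the "detect arithmetic, compute it with code" routing trick.
# Everything here (the regex, the fallback wording) is an illustrative
# assumption, not the real ChatGPT mechanism.
import ast
import operator
import re

# Whitelisted operations, so we evaluate arithmetic without calling eval().
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str):
    """Evaluate a pure arithmetic expression via its AST."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not pure arithmetic")
    return walk(ast.parse(expr, mode="eval"))

def answer(question: str) -> str:
    """Route arithmetic to real computation; fall back to the 'model' otherwise."""
    m = re.search(r"[-\d][\d\s.+*/()^-]*", question)
    if m:
        expr = m.group().replace("^", "**").strip()
        try:
            return f"The result is {safe_eval(expr)}."
        except (ValueError, SyntaxError):
            pass
    return "I'd answer this from the language model itself."

print(answer("What is 12345 * 6789?"))  # computed, not 'confabulated'
```

The point of the sketch: the arithmetic answer comes from executed code, so it is exact; only the non-arithmetic path falls back to token prediction, which is exactly the division of labour the post's conclusion hinges on.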
Memorisation: the deep problem of Midjourney, ChatGPT, and friends
R&A IT Strategy & Architecture
by gctwnl
4M ago
If we ask GPT to get us "that poem that compares the loved one to a summer's day", we want it to produce the actual Shakespeare Sonnet 18, not some confabulation. And it does: it has memorised this part of the training data. This is both sought-after and problematic, and it sets a fundamental limit on the reliability of these models.
What makes Ilya Sutskever believe that superhuman AI is a natural extension of Large Language Models?
R&A IT Strategy & Architecture
by gctwnl
4M ago
I came across a two-minute video in which Ilya Sutskever, OpenAI's chief scientist, explains why he thinks current 'token-prediction' large language models will be able to become superhuman intelligences. How? Just ask them to act like one.
Artificial General Intelligence is Nigh! Rejoice! Be very afraid!
R&A IT Strategy & Architecture
by gctwnl
5M ago
Should we be hopeful or scared about imminent machines that are as intelligent as humans, or more so? Surprisingly, this debate is even older than computers, and from the mathematician Ada Lovelace comes an interesting observation that is as valid now as it was when she made it in 1842.
GPT and Friends bamboozle us big time
R&A IT Strategy & Architecture
by gctwnl
5M ago
After watching my talk explaining GPT in a non-technical way, someone asked GPT to write critically about its own lack of understanding. The result is illustrative, and useful. "Seeing is believing", true, but "believing is seeing" as well.
The hidden meaning of the errors of ChatGPT (and friends)
R&A IT Strategy & Architecture
by gctwnl
6M ago
We should stop labelling the wrong results of ChatGPT and friends (the 'hallucinations') as 'errors'. Even Sam Altman, CEO of OpenAI, agrees: he has said they are more 'features' than 'bugs'. But why is that? And why should we not call them errors?