Despite the growing awareness of the productivity gains that are possible from automating everyday business processes, the great majority — as much as 68% in recent research — are still manual.
In this free webinar from GigaOm Research, GigaOm analyst Stowe Boyd digs into the barriers that are holding back adoption of process automation in a give-and-take conversation with Terry Simpson, Technical Evangelist at Nintex.
The pair will address the preconception that workflow automation requires programming, and is therefore costly, time-consuming, and inaccessible to non-techies. Another concern is that integration with existing tools is difficult, and that adapting to the specific work patterns in your workplace may not be possible. They'll dispel those concerns. Perhaps the biggest barrier is the perception that you have to attack all the processes at once, and that only critical corporate processes should be automated.
In this 1-hour webinar, you will discover:
Why you don’t have to attack all processes at once.
How to take a stepwise approach to knock down barriers.
How to regain time lost in the labyrinth of manual processes.
Register now to join GigaOm and Nintex for this free expert webinar.
This free 1-hour webinar explores how to take advantage of data management to improve the value of data and reuse it efficiently across the organization, and features GigaOm data storage analyst Enrico Signoretti and special guest Jonathan Calmes from Aparavi. Every organization is creating and storing vast amounts of data, most of which has a short active lifespan but needs to be conserved for a long time. Compliance, regulations, never-delete policies, and so on lead us to store and conserve data for longer and longer. At this point even the cloud is not a solution to the increasing infrastructure costs. But what happens if we can create more value out of that data?
Data stored in our systems can be considered a liability or become the most powerful asset for the organization, and any digital transformation initiative can benefit from it.
This 1-hour webinar explores:
Understanding data growth and diversity
Transforming a liability into an asset with classification and search
How to build efficient intelligent archives for data reusability
Applications that can benefit from data reusability
Sign up for this webinar to learn how the right data management practices and intelligent archives can boost the value of data and make it reusable for new applications, becoming a tool to improve and simplify the digital transformation journey of your organization.
Register now to join GigaOm and Aparavi for this free expert webinar.
Pekin Insurance is one of the nation’s most successful insurance providers, with combined assets of $2 billion, more than 800 employees, 1,500 agencies, and 8,500 independent agents. Pekin Insurance is on the fast path to a full overhaul and modernization of their data, from the platform, to quality, to governance, to enabling data consumers. They have built a 3-year strategy focusing on Data & Analytics and are wrapping up the final year, which is focused on a robust data layer with a data lake and a data warehouse, on target, on budget, and within scope.
The future for insurance is customer-centric and data is the key to meeting customer experience objectives.
SAFe Agile project management works well for data projects.
Agile requires patience and time for it to gel.
Do not underestimate the time and cost (or the long-term value) of dealing with data quality and complexity.
Data Lakes provide tremendous value as both data warehouse staging and data science work areas.
Strong data governance is essential for getting the business’s input into data projects.
A data maturity path, such as the one provided by McKnight Consulting Group, is instructive in generating the next steps.
Special thanks to the major contributors to this report, and the true champions of the project’s success: Kim Wienzierl, Assistant Vice President/IT, Pekin Insurance; Toby Cihla, Divisional IT Director, Data, Pekin Insurance; and their teams.
By now, the story is well told on how artificial intelligence is changing businesses, all the way down to impacting core business models. In 2017, Amazon bought Whole Foods then later opened their Amazon Go automated store. Since then, they’ve been using AI to understand and improve the physical retail shopping experience. In 2018, Keller Williams announced a pivot towards becoming an artificial intelligence-driven technology company to compete with the tech-centric entrants into the market like Zillow and Redfin.
These companies are not alone. According to a study by MIT Sloan Management Review, one trillion dollars of new profit will be created from the use of artificial intelligence technologies by 2030. That is roughly 10% of all profits projected for that time. Still, most companies have yet to implement artificial intelligence in their business. Depending on which study you read, 70%-80% of all businesses have yet to begin any AI implementation whatsoever.
The reality is most companies are just not ready for AI, and if they try before they are ready, they will fail. For AI projects to be successful, you likely need to shore up a few areas.
6 Pillars of AI Readiness:
Regardless of the level of expertise in your company or your ability to invest, achieving meaningful results from artificial intelligence requires six key areas to be optimally tuned. Even if only a small portion of the profit forecasts proves true, readiness matters.
Symptoms of Readiness
If AI is the answer, what is the problem? Many companies still struggle with a general understanding of how AI can make a meaningful impact. They don’t realize there are common challenges that plague most businesses where artificial intelligence can provide solutions. Identifying these challenges is a sign that your company may be ready to benefit from artificial intelligence technologies.
Symptoms of Readiness:
Mundane tasks prone to error
Not enough people to get jobs done
Need creative ways to get data
Desire to predict trends or make better decisions
Seeking new business models or to enter new markets
If any of these problems are relevant or a priority, AI has well-documented benefits. Artificial intelligence today is best suited to automate tasks, predict phenomena, and even generate more data.
Leveraging AI for data generation gets less attention. To level-set, data is the most important component of this whole equation (more on that later). We are also surrounded by data exhaust, very little of which is captured and processed for meaningful intelligence. For example, computer vision and optical character recognition can be used to extract data from paper contracts or receipts, which can then feed future predictions.
Without a Culture of Innovation, You Are Not Ready For AI.
A company’s culture is paramount for embracing data and enhanced capabilities. Amazon, Keller Williams, Google, Facebook, and Walmart all have a track record of innovation. They have people and resources dedicated to the research and development of new ideas. Businesses must be fearless in courting innovation and not afraid to spend money for fear of failing in search of success. A willingness to embrace and invest in innovation is a must.
Along with innovation, organizations must see data as a corporate asset. The business and culture must value data and be invested in collecting it. In the future there will be far more cultural considerations that will dictate what and how AI will be adopted. Issues like privacy, explainability, and ethics will all be cultural considerations, dictating where the technology will and won’t be applied.
Without Sufficient Quantity and Quality Data, Artificial Intelligence Won’t Work.
By now it should be no surprise that data is the lifeblood of artificial intelligence. Without data, the algorithms cannot learn and make predictions. Data must be present in sufficient quantity AND quality. Mountains of data may not be enough if signal about the phenomenon you are looking to learn does not exist.
Many AI Pioneers already have robust data and analytics infrastructures along with a broad understanding of what it takes to develop the data for training AI algorithms. AI Investigators and Experimenters, by contrast, struggle because they have little analytics expertise and keep their data largely in silos, where it is difficult to integrate.
In fact, 90% of the effort to deploy AI solutions lies in wrangling data and feature engineering. The more high-quality data, the more accurate the predictions. Bad data is the number one reason most AI projects fail.
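To make the wrangling and feature-engineering effort concrete, here is a minimal sketch in plain Python. The record fields (age, spend, plan) and the derived features are hypothetical examples, not any particular company's schema; real pipelines typically use dedicated tooling, but the steps are the same: drop unusable records, impute missing values, and derive numeric features a model can learn from.

```python
# Minimal sketch of data wrangling and feature engineering in pure Python.
# All field names and the derived features are hypothetical examples.

def clean_records(records):
    """Drop records missing the target label, and impute missing ages with the mean."""
    usable = [r for r in records if r.get("churned") is not None]
    ages = [r["age"] for r in usable if r.get("age") is not None]
    mean_age = sum(ages) / len(ages) if ages else 0.0
    for r in usable:
        if r.get("age") is None:
            r["age"] = mean_age
    return usable

def engineer_features(record):
    """Turn a raw record into a numeric feature vector."""
    return [
        record["age"],
        record["monthly_spend"],
        record["monthly_spend"] / max(record["tenure_months"], 1),  # spend rate
        1.0 if record["plan"] == "premium" else 0.0,                # one-hot flag
    ]

raw = [
    {"age": 34, "monthly_spend": 80.0, "tenure_months": 12, "plan": "premium", "churned": False},
    {"age": None, "monthly_spend": 20.0, "tenure_months": 2, "plan": "basic", "churned": True},
    {"age": 51, "monthly_spend": 45.0, "tenure_months": 0, "plan": "basic", "churned": None},
]

cleaned = clean_records(raw)   # third record dropped: no label to learn from
features = [engineer_features(r) for r in cleaned]
```

Even this toy version shows why the 90% figure rings true: every field needs a policy for missing or malformed values before a model ever sees it.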
Without a strategy, AI solutions risk never making it into production.
As stated before, the business must value data as a corporate asset. That is fundamental to your strategy. However, the thinking must go further. Any AI program must be tightly aligned to support the corporate strategy. Artificial intelligence is an enhanced capability to achieve your business goals.
Companies committed to adopting AI need to make sure their strategies are transformational and should make AI central to revising their corporate strategies.
AI for AI’s sake often leads to long, drawn-out projects that never produce any real value. CEO support is the ideal. Executive sponsors are critical to ensure proper alignment, set business metrics for any technology implementation, and provide air cover against any disputes over data or technology involvement.
Success also depends on getting buy-in from the top executives and the employees who will use the system, clearly identifying the business problem to be solved, and setting metrics to demonstrate the technology’s return on investment.
According to MIT Technology Review, there are key questions a business must answer to formulate their strategy:
What is the problem the business is trying to solve? Is this problem amenable to AI and machine learning?
How will the AI solve the problem? How has the business problem been reframed into a machine-learning problem? What data will be needed to input into the algorithms?
Where does the company source its data? How is the data labeled?
How often does the company validate, test and audit its algorithms for accuracy and bias?
Is AI, or machine learning, the best and only way to solve this problem? Do the benefits outweigh potential privacy violations and other negative impacts?
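The validation-and-audit question above can be made concrete with a small sketch. This is a hypothetical illustration in plain Python of two common checks: overall accuracy, and demographic parity (whether the model's positive-prediction rate differs across groups). The labels, predictions, and group names are invented for the example.

```python
# Minimal sketch of auditing predictions for accuracy and one common bias
# metric (demographic parity gap). All data here is hypothetical.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Difference between the highest and lowest positive-prediction rates across groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

acc = accuracy(y_true, y_pred)                 # 7 of 8 correct
gap = demographic_parity_gap(y_pred, groups)   # 0.75 vs 0.50 positive rate
```

A regular cadence of checks like these, on fresh data, is what "validate, test and audit" looks like in practice; acceptable thresholds for the gap are a business and ethics decision, not a purely technical one.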
Without cloud-based technologies, many AI solutions can’t operate.
For most businesses, embracing cloud-based computing and storage technologies is critical for AI programs to perform effectively. Artificial intelligence models require tremendous compute power to process massive data sets, so businesses need ready access to compute power on demand.
Since 2012, the amount of computation used in the largest AI training runs has been increasing exponentially, with a 3.5-month doubling time (by comparison, Moore’s Law had an 18-month doubling period).
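To see what those doubling times imply, a quick back-of-the-envelope calculation (assuming the quoted doubling periods hold steady over a year) shows the gap:

```python
# Annual growth implied by a fixed doubling time, assuming steady
# exponential growth over 12 months.

def annual_growth_factor(doubling_time_months):
    """How much a quantity multiplies in one year given its doubling time."""
    return 2 ** (12 / doubling_time_months)

ai_growth = annual_growth_factor(3.5)    # roughly 10.8x per year
moore_growth = annual_growth_factor(18)  # roughly 1.6x per year
```

In other words, compute demand for frontier AI training has been growing nearly an order of magnitude per year, versus well under 2x for Moore's Law, which is why on-demand cloud capacity matters so much.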
If AI programs are to be successful, companies need to embrace cloud technologies. They need to be willing to adopt platforms that provision GPU clusters based on workloads required by deployed models. Cloud is important because owning the hardware can cost over a million dollars for a single cluster, according to OpenAI.
Without internal expertise, AI adoption is challenging.
To successfully move AI projects through the development life-cycle, from data to production, you need in-house technical expertise. At minimum, you need dedicated data managers who can help wrangle data to train models. It is important to have software engineers or DevOps leads who can help move trained models into production environments so non-technical stakeholders can easily run reports. These two roles can be augmented by service providers that build and train models. However, it is better if you also have data scientists, analysts, and data engineers who can help strategize and execute projects.
Without well-defined technology processes, projects risk never making it into production.
Another common risk associated with AI projects is that trained models don’t transition into production. It is important to have a connection between the business strategy and the delivery of AI technology. It’s especially important to have a plan in place for how to access the data, train the model, and handoff the model to be deployed as a usable solution.
Develop an operating model that defines AI roles and responsibilities for the business, technology teams, external vendors, and existing BPM centers of excellence. Some firms will separate model development — i.e., selecting and developing the AI, data science, and algorithms — from implementation. But over time, tech management will own deployment and operational management.
Moving forward it will be important to have documented processes for post-deployment. Once multiple models are in production you need to monitor models for performance and have a documented process for re-training.
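The monitoring-and-retraining process described above can be sketched as a small piece of tooling. This is a hypothetical illustration in plain Python: track a model's rolling accuracy on labeled outcomes and flag when it degrades enough to warrant retraining. The window size and threshold are example choices a team would tune for its own use case.

```python
# Minimal sketch of post-deployment model monitoring: track rolling
# accuracy and flag when it falls below a retraining threshold.
# The window and threshold values are hypothetical.

from collections import deque

class ModelMonitor:
    def __init__(self, window=100, retrain_threshold=0.8):
        self.outcomes = deque(maxlen=window)   # True = prediction was correct
        self.retrain_threshold = retrain_threshold

    def record(self, prediction, actual):
        """Log one prediction against its eventual real-world outcome."""
        self.outcomes.append(prediction == actual)

    def rolling_accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def needs_retraining(self):
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.retrain_threshold

monitor = ModelMonitor(window=4, retrain_threshold=0.8)
for pred, actual in [(1, 1), (0, 0), (1, 0), (0, 1)]:
    monitor.record(pred, actual)
# rolling accuracy is now 0.5, below the 0.8 threshold
```

Wiring a check like this into the deployment pipeline, together with a documented retraining runbook, is what turns "monitor your models" from advice into process.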
Take an AI readiness assessment.
Many companies are not fully equipped to realize the full benefits created by artificial intelligence. And it is hard to know if “good enough” is good enough. Start by evaluating how equipped your business may be to successfully deploy projects. There are many ways to assess your readiness and de-risk investment. Online AI readiness assessments will help you start to understand if your organization has the prerequisites to successfully execute initial projects. If you are not ready, there is a lot of opportunity at stake. The most valuable thing you can do is to start to get ready. If you have executive buy-in, partner with consultancies or hire an AI strategist who can help put the pieces in place.
Byron Reese: This is Voices in AI brought to you by GigaOm, and I’m Byron Reese. I couldn’t be more excited today. My guest is Douglas Lenat. He is the CEO of Cycorp of Austin, Texas, where GigaOm is based, and he’s been a prominent researcher in AI for a long time. He was awarded the biennial IJCAI Computers and Thought Award in 1976. He created the machine learning program AM. He worked on (symbolic, not statistical) machine learning with his AM and Eurisko programs, knowledge representation, cognitive economy, blackboard systems, and what he dubbed in 1984 “ontological engineering.”
He’s worked on military simulations and numerous intelligence projects for the government, and with scientific organizations. In 1980 he published a critique of conventional random-mutation Darwinism. He authored a series of articles in The Journal of Artificial Intelligence exploring the nature of heuristic rules. But that’s not all: he was one of the original Fellows of AAAI, and he’s the only individual to serve on the scientific advisory boards of both Apple and Microsoft. He is a Fellow of AAAI and the Cognitive Science Society, and one of the original founders of TTI/Vanguard in 1991. And on and on and on… and he was named one of the WIRED 25. Welcome to the show!
Douglas Lenat: Thank you very much Byron, my pleasure.
I have been so looking forward to our chat and I would just love, I mean I always start off asking what artificial intelligence is and what intelligence is. And I would just like to kind of jump straight into it with you and ask you to explain, to bring my listeners up to speed with what you’re trying to do with the question of common sense and artificial intelligence.
I think that the main thing to say about intelligence is that it’s one of those things that you recognize it when you see it, or you recognize it in hindsight. So intelligence to me is not just knowing things, not just having information and knowledge but knowing when and how to apply it, and actually successfully applying it in those cases. And what that means is that it’s all well and good to store millions or billions of facts.
But intelligence really involves knowing the rules of thumb, the rules of good judgment, the rules of good guessing that we all almost take for granted in our everyday life in common sense, and that we may learn painfully and slowly in some field where we’ve studied and practiced professionally, like petroleum engineering or cardiothoracic surgery or something like that. And so common sense rules like: bigger things can’t fit into smaller things. And if you think about it, every time that we say anything or write anything to other people, we are constantly injecting into our sentences pronouns and ambiguous words and metaphors and so on. We expect the reader or the listener has that knowledge, has that intelligence, has that common sense to decode, to disambiguate what we’re saying.
So if I say something like “Fred couldn’t put the gift in the suitcase because it was too big,” I don’t mean the suitcase was too big, I must mean that the gift was too big. In fact if I had said “Fred can’t put the gift in the suitcase because it’s too small” then obviously it would be referring to the suitcase. And there are millions, actually tens of millions of very general principles about how the world works: like big things can’t fit into smaller things, that we all assume that everybody has and uses all the time. And it’s the absence of that layer of knowledge which has made artificial intelligence programs so brittle for the last 40 or 50 years.
My number one question I ask every AI, as a Turing test sort of thing, is: what’s bigger, a nickel or the sun? And there’s never been one that’s been able to answer it. And that’s the problem you’re trying to solve.
Right. And I think that there’s really two sorts of phenomena going on here. One is understanding the question and knowing the sense in which you’re talking about ‘bigger.’ One in the sense of perception if you’re holding up a nickel in front of your eye and so on and the other of course, is objectively knowing that the sun is actually quite a bit larger than a typical nickel and so on.
And so one of the things that we have to bring to bear, in addition to everything I already said, are Grice’s rules of communicating between human beings where we have to assume that the person is asking us something which is meaningful. And so we have to decide what meaningful question would they really possibly be having in mind like if someone says “Do you know what time it is?” It’s fairly juvenile and jerky to say “yes” because obviously what they mean is: please tell me the time and so on. And so in the case of the nickel and the sun, you have to disambiguate whether the person is talking about a perceptual phenomenon or an actual unstated physical reality.
So I wrote an article that I put a lot of time and effort into and I really liked it. I ran it on GigaOm and it was 10 questions that Alexa and Google Home answered differently but objectively. They should have been identical, and in every one I kind of tried to dissect what went wrong.
And so I’m going to give you two of them and my guess is you’ll probably be able to intuit in both of them what the answer, what the problem was. The first one was: who designed the American flag? And they gave me different answers. One said “Betsy Ross,” and one said “Robert Heft,” so why do you think that happened?
All right, so in some sense both of them are doing what you might call ‘animal-level intelligence’: not really understanding what you’re asking at all, but doing the equivalent of (I won’t even call it natural language processing) let’s call it ‘string processing’: looking at processed web pages, looking for the confluence, preferably in the same order, of some of the words and phrases that were in your question, and looking for essentially sentences of the form ‘X designed the U.S. flag’ or something.
And it’s really no different than if you ask, “How tall is the Eiffel Tower?” and you get two different answers: one based on answering from the one in Paris and one based on the one in Las Vegas. And so it’s all well and good to have that kind of superficial understanding of what it is you’re actually asking, as long as the person who’s interacting with the system realizes that the system isn’t really understanding them.
It’s sort of like your dog fetching the newspaper for you. It’s something which is, you know, wagging its tail and putting things in front of you, and then you, as the person who has intelligence, have to look at it and disambiguate: what does this answer actually imply about what it thought the question was, or what question is it actually answering, and so on.
But this is one of the problems that we experienced about 40 years ago in artificial intelligence, in the 1970s. We built AI systems using what today would very clearly be called neural net technology. Maybe there’s been one small tweak in that field that’s worth mentioning, involving additional hidden layers and convolution. And we built AIs using symbolic reasoning that used logic, much like our Cyc system does today.
And again the actual representation looks very similar to what it does today and there had to be a bunch of engineering breakthroughs along the way to make that happen. But essentially in the 1970s we built AIs that were powered by the same two sources of power you find today, but they were extremely brittle and they were brittle because they didn’t have common sense. They didn’t have that kind of knowledge that was necessary in order to understand the context in which things were said, in order to understand the full meaning of what was said. They were just superficially reasoning. They had the veneer of intelligence.
We might have a system which was the world’s expert at deciding what kind of meningitis a patient might be suffering from. But if you told it about your rusted out old car or you told it about someone who is dead, the system would blithely tell you what kind of meningitis they probably were suffering from because it simply didn’t understand things like inanimate objects don’t get human diseases and so on.
And so it was clear that somehow we had to pull the mattress out of the road in order to let traffic toward real AI proceed. Someone had to codify the tens of millions of general principles like non humans don’t get human diseases, and causes don’t happen before their effects, and large things don’t fit into smaller things, and so on, and that it was very important that somebody do this project.
We thought we were actually going to have a chance to do it with Alan Kay at the Atari research lab and he assembled a great team. I was a professor at Stanford in computer science at the time, so I was consulting on that, but that was about the time that Atari peaked and then essentially had financial troubles as did everyone in the video game industry at that time, and so that project splintered into several pieces. But that was the core of the idea that somehow someone needed to collect all this common sense and represent it and make it available to make our AIs less brittle.
And then an interesting thing happened right at that point in time when I was beating my chest and saying ‘hey, someone please do this’: America was frightened to hear that the Japanese had announced something they called the ‘fifth generation computing effort.’ Japan basically threatened to do in computing hardware and software and AI what they had just finished doing in consumer electronics and in the automotive industry: namely, wresting leadership away from the West. And so America was very scared.
Congress quickly passed something (that’s how you can tell it was many decades ago) called the National Cooperative Research Act, which basically said ‘hey, all you large American companies: normally if you colluded on R&D, we would prosecute you for antitrust violations, but for the next 10 years, we promise we won’t do that.’ And so around 1981, a few research consortia sprang up in the United States for the first time in computing and hardware and artificial intelligence, and the first one of those was right here in Austin. It was called MCC, the Microelectronics and Computer Technology Corporation. Twenty-five large American companies each contributed a few million dollars a year to fund high-risk, high-payoff, long-term R&D projects, projects that might take 10 or 20 or 30 or 40 years to reach fruition, but which, if they succeeded, could help keep America competitive.
And Admiral Bob Inman, who’s also an Austin resident, one of my favorite people, one of the smartest and nicest people I’ve ever met, was the head of MCC, and he came and visited me at Stanford and said “Hey look, Professor, you’re making all this noise about what somebody ought to do. You have six or seven graduate students. If you do that here, it’s going to take you a few thousand person-years. That means it’s going to take you a few hundred years to do that project. If you move to the wilds of Austin, Texas and we put in ten times that effort, then you’ll just barely live to see the end of it, a few decades from now.”
And that was a pretty convincing argument, and in some sense that is the summary of what I’ve been doing for the last 35 years here: taking time off from research to do a massive engineering project called Cyc, which is collecting that information and representing it formally, putting it all in one place for the first time.
And the good news, since you’ve waited thirty-five years to talk to me, Byron, is that we’re nearing completion, which is a very exciting phase to be in. Most of our funding these days at Cycorp doesn’t come from the government anymore, and it doesn’t come from just a few companies anymore; it comes from a large number of very large companies that are actually putting our technology into practice, not just funding it for research reasons.
So that’s big news. Just to summarize all of that: you’ve spent the last 35 years working on a system for capturing all of these rules of thumb, like ‘big things can’t go in small things,’ listing them all out, every one of them (dark things are darker than light things), and then not just listing them like in an Excel spreadsheet, but learning how to express them all in ways that they can be programmatically used.
So what do you have in the end when you have all of that? Like when you turn it on, will it tell me which is bigger: a nickel or the sun?
Sure. And in fact, for most of the questions that you might think anyone ought to be able to answer, Cyc is actually able to do a pretty good job. It doesn’t understand unrestricted natural language, so sometimes we’ll have to encode the question in logic, in a formal language, but the language is pretty big. In fact the language has about a million and a half words, and of those, about 43,000 are what you might think of as relationship-type words, like ‘bigger than’ and so on. By representing all of the knowledge in that logical language, instead of just collecting it all in English, what you’re able to do is have the system do automatic mechanical inference, logical deduction, so that if there is something which logically follows from one or two or 2,000 statements, then Cyc (our system) will grind through automatically and mechanically come up with that entailment.
And so this is really the place where we diverge from everyone else in AI, who are either satisfied with machine learning representation, which is a very shallow, almost stimulus-response-pair type of representation of knowledge; or who are working in knowledge graphs and triple and quad stores and what people call ontologies these days, which you can really almost think of as three- or four-word English sentences. And there are an awful lot of problems you can solve just with machine learning.
There is an even larger set of problems you can solve with machine learning plus that kind of taxonomic knowledge representation and reasoning. But in order to really capture the full meaning, you need an expressive logic: something that is as expressive as English. Think in terms of taking one of your podcasts and forcing it to be rewritten as a series of three-word sentences. It would be a nightmare. Or imagine taking something like Shakespeare’s Romeo and Juliet and trying to rewrite it as a set of three- or four-word sentences. It probably could theoretically be done, but it wouldn’t be any fun to do, and it certainly wouldn’t be any fun to read or listen to. And yet that’s the tradeoff people are making: if you use that limited a logical representation, then it’s very easy and well understood to do the mechanical inference that’s needed very efficiently.
So if you represent ‘is a type of’ relationships, you can combine them and chain them together and conclude that a nickel is a type of coin, or something like that. But there really is this difference between the expressive logics that have been understood by philosophers for over 100 years, starting with Frege, and Whitehead and Russell and others, and the limited logics that others in AI are using today.
And so we essentially started digging this tunnel from the other side and said “We’re going to be as expressive as we have to, and we’ll find ways to make it efficient,” and that’s what we’ve done. That’s really the secret of what we’ve done: not just the massive codification and formalization of all of that common sense knowledge, but finding what turned out to be about 1,100 tricks and techniques for speeding up the inferring, the deducing process, so that we could get answers in real time instead of after thousands of years of computation.
The fight against data growth and consolidation was lost a long time ago. Several factors contribute to the increasing amount of data we store: users’ desire to keep everything they create, organizational policies, new types of rich documents, new applications, and demanding regulations are just some of the culprits.
Many organizations try to eliminate storage silos and consolidate them into large repositories while applying conservative policies to save capacity. It’s time to take a different approach and think about managing data correctly to get the most out of it. Unstructured data, if not correctly managed, can become a huge pain point from both financial and technical perspectives. It is time to think differently and transform this challenge into an opportunity. Let us show you how to get more efficiency and value out of your data!
Join this free 1-hour webinar from GigaOm Research that brings together experts in data storage and cloud computing, featuring GigaOm analyst Enrico Signoretti and special guest Krishna Subramanian from Komprise.
In this 1-hour webinar, you will discover:
The reality behind data growth and what to expect in the future
Why data storage consolidation failed
Why cloud isn’t a solution if not managed correctly
How to preserve and improve the user experience while changing how your infrastructure works
How to understand the value of stored data to take advantage of it
How data management is evolving and what you should expect in 12-to-18 months
Register now to join GigaOm and Komprise for this free expert webinar.
Byron Reese: This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today my guest is Sameer Maskey. He is the founder and CEO of Fusemachines and he’s an adjunct assistant professor at Columbia. He holds an undergraduate degree in Math and Physics from Bates College and a PhD in Computer Science from Columbia University as well. Welcome to the show, Sameer.
Sameer Maskey: Thanks Byron, glad to be here.
Can you recall the first time you ever heard the term ‘artificial intelligence’ or has it always just been kind of a fixture of your life?
It’s always been a fixture of my life. But the first time I heard about it in the sense it’s understood in today’s world was in my first year of undergrad, when I was thinking of building talking machines. That was my dream: building a machine that can sort of converse with you. And in doing that research I happened to run into several books on AI, particularly a book called Voice and Speech Synthesis, and that’s how my journey in AI came to fruition.
So, a conversational AI. I assume early on you heard about the Turing Test and thought, ‘I wonder how you would build a device that could pass that.’ Is that fair to say?
Yeah, I’d heard about the Turing Test, but my interest stemmed from being able to build a machine that could just talk: read a book and then talk with you about it. And I was particularly interested in being able to build the machine in Nepal. I grew up in Nepal, and I was always interested in building machines that can talk in Nepali. So more than the Turing Test, it was this notion of ‘can we build a machine that can talk in Nepali and converse with you?’
Would that require a general intelligence, or are we not anywhere near a general intelligence? For it to be able to read a book and then have a conversation with you about The Great Gatsby or whatever, would that require general intelligence?
Being able to build a machine that can read a book and then just talk about it would require what is being termed artificial general intelligence. That raises many other kinds of questions about what AGI is and how it’s different from AI. But we are still quite a long way from being able to build a machine that can just read a novel or a history book and then sit down with you and discuss it. I think we are quite far away from it, even though there’s a lot of research being done from a conversational AI perspective.
Yeah I mean the minute a computer can learn something, you can just point it at the Internet and say “go learn everything” right?
Exactly. And we’re not there, at all.
Pedro Domingos wrote a book called The Master Algorithm. He said he believes there is some uber-algorithm we haven’t yet discovered which accounts for intelligence in all of its variants, and part of the reason he believes that is that we’re made with shockingly little code, our DNA. And the amount of that code which is different from a chimp’s, say, may only be six or seven megabytes. That tiny bit of code doesn’t have intelligence, obviously, but it knows how to build intelligence. So is it possible that that level of artificial intelligence, whether you want to call it AGI or not, might be a really simple thing that’s right in front of us and we can’t see it? Or do you think it’s going to be a long hard slog to finally get there, a piece at a time?
To answer that question, and to be able to say whether there is this master algorithm that’s just not discovered yet, is hard, because we as human beings, even neurologists and neuroscientists, don’t fully understand how all the pieces of cognition work. Like how my four-and-a-half-year-old kid is able to learn from a couple of different words, put them together, and start having conversations. So I think we don’t even understand how human brains work. I get a little nervous when people claim or suggest there’s this one master algorithm that’s just yet to be discovered.
We have this one trick that is working now, where we take a bunch of data about the past, we study it with computers and look for patterns, and we use those patterns to predict the future. That’s kind of what we do; that’s machine learning in a nutshell. And it’s hard for me, for instance, to see how that will ever write The Great Gatsby, let alone read it and understand it. How could it ever be creative? But maybe it can be.
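Byron’s nutshell description of machine learning (study past data, extract a pattern, use the pattern to predict) can be made concrete with a minimal sketch. The numbers and the least-squares fit below are our own illustration, not anything discussed by the guests:

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b: the 'pattern' extracted from past data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Past observations (hypothetical numbers). The model only extrapolates the
# trend it has seen; it has no notion of *why* the trend exists.
xs, ys = [1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = fit_line(xs, ys)
print(round(a * 6 + b, 1))  # predict the next, unseen point
```

The point of the sketch is the limitation it exposes: the system interpolates patterns in historical data and nothing more, which is exactly why creativity is hard to locate in it.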
Through one lens, we’re not that far along with AI, and I guess that’s my question: why is AI turning out to be so hard? We’re intelligent, we can reflect on our own intelligence, and we can kind of figure out how we learn things, but we have this brute-force way of just cramming a bunch of data down the machine’s throat, and then it can spot spam email or route you through traffic and nothing else. So why is AI turning out to be so hard?
Because the machinery that’s been built over many, many years, as AI has evolved to the point it’s at right now, is, like you pointed out, still a lot of systems looking at a lot of historical data, building models that figure out patterns in it, and making predictions from it, and that requires a lot of data. One of the reasons deep learning is working very well is that there’s so much data right now.
We haven’t figured out how, with very little data, you can generalize the patterns to be able to do things. That piece, how to build a machine that can generalize a decision-making process based on just a few pieces of information, we haven’t figured out. And until we do, it is still going to be very hard to make AGI, or a system that can just write The Great Gatsby. I don’t know how long it will be until we figure that part out.
A lot of times people think that a general intelligence is just an evolutionary product of narrow intelligence. First it can play Go, then it can play all strategy games, then it can do this and that, and it gets better and better, and then one day it’s general.
Is it possible that what we know how to do now has absolutely nothing to do with general intelligence? Like we haven’t even started working on that problem, it’s a completely different problem. All we’re able to do is make things that can fake intelligence, but we don’t know how to make anything that’s really intelligent. Or do you think we are on a path that’s going to just get better and better and better until one day we have something that can make coffee and play Go and compose sonnets?
There is some new research being done on AGI, but the current path, where we train on more and more data with bigger and bigger architectures and sort of simulate or fake intelligence, I don’t think will lead to solutions that have general intelligence the way we are talking about it. It is still a very similar model to what we’ve been using before, and that was invented a long time ago.
These models are much more popular right now because they can do more, with more data and more compute power. So when a system is able to drive a car based on computer vision and a neural net and the learning behind it, it simulates intelligence. But it’s probably not intelligence the way we describe human intelligence, such that it can write books and poetry. So are we on the path to AGI? I don’t think the current evolution of this machinery is going to lead to AGI. Some fundamentally new ways of exploring things, and of framing the problem, are probably required to get at how general intelligence works.
Byron Reese: This is Voices in AI brought to you by GigaOm, and I’m Byron Reese. Today my guest is Amir Husain. He is the founder and CEO of SparkCognition Inc., and he’s the author of The Sentient Machine, a fine book about artificial intelligence. In addition to that, he is a member of the AI task force with the Center for New American Security. He is a member of the board of advisors at UT Austin’s Department of Computer Science. He’s a member of the Council on Foreign Relations. In short, he is a very busy guy, but has found 30 minutes to join us today. Welcome to the show, Amir.
Amir Husain: Thank you very much for having me Byron. It’s my pleasure.
Byron, I wrote this book because I thought there was already a lot of writing on artificial intelligence and what it could be. There’s a lot of sci-fi that has visions of artificial intelligence, and there’s a lot of very technical material around where artificial intelligence is as a science and as a practice today. So there’s a lot of that literature out there. But what I also saw was a lot of angst back in 2014 and 2015. I actually had a personal experience in that realm, where outside of my South by Southwest talks there was an anti-AI protest.
So just watching those protesters and seeing what their concerns were, I felt that a lot of the philosophical, existential questions around the advent of AI deserved attention: if AI indeed ends up being like Commander Data, if it has sentience and becomes artificial general intelligence, then it will be able to do our jobs better than we can, and it will be more capable in, let’s say, the ‘art of war’ than we are. Does this mean we will lose our jobs, that our lives will be lacking in meaning, and that maybe the AI will kill us?
These are the kinds of concerns that people have had around AI, and I wanted to reflect on notions of man’s ability to create, the aspects of that which are embedded in our historical and religious traditions, our conception of man versus our creator, and how all that influences how we see this age of AI, where man might be empowered to create something which can in turn create, which can in turn think.
There are a lot of folks who also feel that this is far away, and as an AI practitioner I agree: I don’t think that artificial general intelligence is around the corner. It’s not going to happen next May, and even though I suppose some group could surprise us, the likely outcome is that we are going to wait a few decades. I think waiting a few decades isn’t a big deal, because in the grand scheme of things, in the history of the human race, what is a few decades? So ultimately the questions are still valid, and this book was written to address some of those existential questions through elements of philosophy, as well as science, as well as the reality of where AI stands at the moment.
So talk about those philosophical questions just broadly. What are those kinds of questions that will affect what happens with artificial intelligence?
Well, one question is a very simple one of self-worth. We tend to define ourselves by our capabilities and the jobs that we do. Many of our last names, in many cultures, are literally indicative of our profession: Goldsmith, for example, or Farmer. And this is not just a European thing; across the world you see this phenomenon of last names reflecting the profession of a woman or a man. We internalize the jobs that we do as essentially being our identity, literally to the point where we take it on as a name.
So now, when you de-link from a man or a woman the ability to produce, to engage in that particular labor that is a part of their identity, then what’s left? Are you still the human that you were with that skill? Are you less of a human being? Is humanity in any way linked to your ability to conduct this kind of economic labor? This is one question that I explored in the book, because I don’t know whether people really contemplate this issue so directly and think about it in philosophical terms, but I do know that, subjectively, people get depressed when they’re confronted with the idea that they might not be able to do the job that they are comfortable doing, or have been comfortable doing for decades. So at some level it’s obviously having an impact.
And the question then is: is our ability to perform a certain class of economic labor in any way intrinsically connected to identity? Is it part of humanity? I explore this concept and say, “OK, let’s take this away, let’s cut away all of the extra frills, all of what is not absolutely, fundamentally, uniquely human.” That was an interesting exercise for me. The conclusion I came to (I don’t know whether I should spoil the book by sharing it here, but in a nutshell, and it’s no surprise) is that our cognitive function, our higher-order thinking, our creativity, these are the things which make us absolutely unique amongst the known creation. So this is one question, of self-worth in the age of AI, and another one is…
Just to put a pin in that for a moment: in the United States the workforce participation rate is only about 50% to begin with, so only about half of adults work, because you’ve got adults that are retired, people who are unable to work, people who are independently wealthy… I mean, we already have about half of adults not working. Does it really rise to the level of a philosophical question when it’s already something we have thousands of years of history with? What are the really meaty things that AI gets at? For instance, do you think a machine can be creative?
Absolutely I think the machine can be creative.
You think people are machines?
I do think people are machines.
So then if that’s the case, how do you explain things like the mind? How do you think about consciousness? We don’t just measure temperature, we can feel warmth, we have a first person experience of the universe. How can a machine experience the world?
Well, you know, look, there’s this age-old discussion about qualia and the subjective experience, and obviously that’s linked to consciousness, because that kind of subjective experience requires you to first know of your own existence and then apply the feeling of that experience to yourself in your mind. Essentially you are simulating not only the world, but you also have a model of yourself. And ultimately, in my view, consciousness is an emergent phenomenon.
You know the very famous Marvin Minsky hypothesis, The Society of Mind. I don’t know that I agree with every last bit of it, but the basic concept is that there are a large number of processes, specialized in different things, running in the mind (the software being the mind and the hardware being the brain), and that the complex interactions of these processes result in something that looks very different from any one of them independently. This, in general, is a phenomenon called emergence. It exists in nature, and it also exists in computers.
One of the first graphical programs I wrote as a child, in BASIC, drew straight lines, and yet on a CRT display what I actually saw were curves. I’d never drawn curves, but it turns out that when you light a large number of pixels with a certain gap in the middle, on a CRT display there are all sorts of effects and interactions, like the Moiré effect and so on, where what you thought you were drawing was lines but, looked at from an angle, shows up as curves.
So the process of drawing a line is nothing like drawing a curve; there was no active intent or design to produce a curve, the curve just shows up. It’s a very simple example, one a kid writing a few lines of BASIC can produce, but there are obviously more complex examples of emergence as well. And so consciousness, to me, is an emergent property, an emergent phenomenon. It’s not about one thing.
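Amir’s point, that curves can emerge from nothing but straight lines, can be reproduced without a CRT using the classic “curve stitching” construction (a related but distinct phenomenon from the Moiré effect he describes). A family of straight chords from (t, 0) to (0, 1 - t) traces a curved envelope, sqrt(x) + sqrt(y) = 1, even though no individual element is curved. The sketch below is our own illustration, not code from the interview:

```python
import math

def envelope_height(x, n_lines=500):
    """Height of the upper boundary traced at horizontal position x by the
    family of straight chords joining (t, 0) to (0, 1 - t)."""
    best = 0.0
    for i in range(1, n_lines):
        t = i / n_lines
        if t <= x:
            continue  # chords with t <= x lie at or below the axis here
        best = max(best, (1 - t) * (1 - x / t))
    return best

# No chord is curved, yet the boundary they trace follows sqrt(x) + sqrt(y) = 1:
for x in (0.1, 0.25, 0.5):
    print(round(envelope_height(x), 3), round((1 - math.sqrt(x)) ** 2, 3))
```

Each printed pair matches: the curve emerges from the interaction of the straight chords, with no intent to draw one, which is the sense of emergence being discussed.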
I don’t think there is a consciousness gland. I think that there are a large number of processes that interact to produce this consciousness. And what does that require? It requires for example a complex simulation capability which the human brain has, the ability to think about time, to think about objects, model them and to also apply your knowledge of physical forces and other phenomena within your brain to try and figure out where things are going.
So that simulation capability is very important, and the other capability that’s important is the ability to model yourself. When you model yourself, put yourself in the simulator, and see all these different things happening, there is not the real pain that you experience when you simulate, for example, being struck by an arrow, but there might be some fear. And why is that fear emanating? It’s because you watch your own model, in your imagination, in your simulation, suffer some sort of a problem. And that is very internal, right? None of this has happened in the external world, but you’re conscious of it happening. So to me, at the end of the day, consciousness has some fundamental requirements. I believe simulation and self-modeling are two of those requirements, but ultimately it’s an emergent property.
In the era of data lakes and digital transformation, data catalogs are taking on new functionality and revitalized importance. You’d be hard put to manage your data landscape responsibly without a data catalog, but just because catalogs are needed doesn’t mean they are just a dry requirement. Done right, data catalogs can make self-service analytics across the enterprise far more productive, widely adopted and, dare we say it, fun.
Data catalogs can facilitate frictionless discoverability, fostering the user enthusiasm needed for a robust data culture. They help users find the data they want and discover data they may not have known was available. Machine learning/AI-driven recommendations help users too, especially when they’re security-aware and based on what other users have found useful.
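As a rough sketch of the usage-based recommendations described above, consider a toy collaborative filter. The dataset names, usage log, and scoring scheme here are hypothetical illustrations, not Alation’s actual algorithm:

```python
from collections import Counter

# Hypothetical usage log: which datasets each analyst has bookmarked.
usage = {
    "ana":   {"sales_2023", "customers", "returns"},
    "bo":    {"sales_2023", "customers", "web_traffic"},
    "carla": {"customers", "web_traffic"},
}

def recommend(user, log):
    """Rank datasets the user hasn't seen, weighted by how many bookmarks
    they share with each other user (a crude collaborative filter)."""
    mine = log[user]
    scores = Counter()
    for other, theirs in log.items():
        if other == user:
            continue
        overlap = len(mine & theirs)  # similarity: shared bookmarks
        for dataset in theirs - mine:
            scores[dataset] += overlap
    return [d for d, _ in scores.most_common()]

print(recommend("ana", usage))  # both peers point ana at "web_traffic"
```

A production catalog would layer security awareness on top, filtering suggestions to datasets the user is actually permitted to see.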
People are an essential part of successful data catalog implementations, and social collaboration features help spread analytics enthusiasm. Peer endorsement of data sets, user-contributed documentation, and peer-to-peer communication are all important, and they motivate users toward successful adoption.
To learn more about the benefits of grassroots data catalog implementations and how to get teams excited to join in on the fun, join us for this free 1-hour webinar from GigaOm Research. The webinar features GigaOm analyst Andrew Brust and special guest Aaron Kalb, Co-founder and VP of Design and Strategic Initiatives at Alation, a frontrunner in the data catalog arena.
In this 1-hour webinar, you will discover:
Why data catalogs have emerged so prominently in the data lake/BI/analytics world
How data catalogs can be user-positive, not just tools for IT
The role of machine learning and AI in data catalog success
Where governed personally identifiable information (PII) and business glossary connectivity tie in
Register now to join GigaOm and Alation for this free expert webinar.