Pillars of Readiness

By now, the story is well told on how artificial intelligence is changing businesses, all the way down to impacting core business models. In 2017, Amazon bought Whole Foods and later opened its automated Amazon Go store; since then, it has been using AI to understand and improve the physical retail shopping experience. In 2018, Keller Williams announced a pivot toward becoming an artificial intelligence-driven technology company to compete with tech-centric entrants into the market like Zillow and Redfin.

These companies are not alone. According to a study by MIT Sloan Management Review, one trillion dollars of new profit will be created from the use of artificial intelligence technologies by 2030. That is roughly 10% of total profits projected for that time. Still, most companies have yet to implement artificial intelligence in their business. Depending on what study you read, 70%-80% of all businesses have yet to begin any AI implementation whatsoever.

The reality is most companies are just not ready for AI, and if they try before they are ready, they will fail. For AI projects to be successful, you likely need to shore up a few areas.

6 Pillars of AI Readiness:

  • Culture
  • Data
  • Strategy
  • Technology
  • Expertise
  • Operations

Regardless of your company's level of expertise or ability to invest, achieving meaningful results from artificial intelligence requires these six key areas to be optimally tuned. Even if only a small portion of the profit forecasts are true, readiness matters.

Symptoms of Readiness

If AI is the answer, what is the problem? Many companies still struggle with a general understanding of how AI can make a meaningful impact. They don't realize there are common challenges that plague most businesses where artificial intelligence can provide solutions. Identifying these challenges is a sign that your company may be ready to benefit from artificial intelligence technologies.

Symptoms of Readiness:

  • Mundane tasks prone to error
  • Not enough people to get jobs done
  • Need creative ways to get data
  • Desire to predict trends or make better decisions
  • Seeking new business models or to enter new markets

If any of these problems are relevant or a priority, AI has well-documented benefits. Artificial intelligence today is best suited to automate tasks, predict phenomena, and even generate more data.

Leveraging AI for data generation gets less attention. To level-set: data is the most important component of this whole equation (more on that later). We are also surrounded by data exhaust, very little of which is captured and processed for meaningful intelligence. For example, computer vision and optical character recognition can extract data from paper contracts or receipts that can then feed future predictions.
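To make that concrete, here is a minimal sketch of the receipt idea, assuming the pytesseract and Pillow packages (and a local Tesseract install); the file name and the regular expression for the total are purely illustrative.

```python
# Minimal sketch: pull a total amount out of a scanned receipt with OCR.
# Assumes pytesseract and Pillow are installed and Tesseract is available
# on the system; the regex and file name are illustrative only.
import re
from typing import Optional

from PIL import Image
import pytesseract

def extract_receipt_total(image_path: str) -> Optional[float]:
    """Return the first 'Total: $xx.xx'-style amount found in the image, if any."""
    text = pytesseract.image_to_string(Image.open(image_path))
    match = re.search(r"total[:\s]*\$?(\d+\.\d{2})", text, flags=re.IGNORECASE)
    return float(match.group(1)) if match else None

# Hypothetical usage:
# print(extract_receipt_total("receipt_scan.png"))
```

Structured fields extracted this way can then be stored alongside the rest of your data and used as inputs to downstream models.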

Without a Culture of Innovation, You Are Not Ready For AI.

A company's culture is paramount for embracing data and enhanced capabilities. Amazon, Keller Williams, Google, Facebook, and Walmart all have a track record of innovation. They have people and resources dedicated to the research and development of new ideas. Businesses must be fearless in courting innovation, willing to spend money in pursuit of success rather than holding back for fear of failure. A willingness to embrace and invest in innovation is a must.

Along with innovation, organizations must see data as a corporate asset. The business and culture must value data and be invested in collecting it. In the future, cultural considerations will play an even larger role in dictating what AI is adopted and how. Issues like privacy, explainability, and ethics will all be cultural considerations, dictating where the technology will and won't be applied.

Without Sufficient Quantity and Quality Data, Artificial Intelligence Won’t Work.

By now it should be no surprise that data is the lifeblood of artificial intelligence. Without data, the algorithms cannot learn and make predictions. Data must be present in sufficient quantity AND quality. Mountains of data may not be enough if they carry no signal about the phenomenon you are trying to learn.

Many AI Pioneers already have robust data and analytics infrastructures along with a broad understanding of what it takes to develop the data for training AI algorithms. AI Investigators and Experimenters, by contrast, struggle because they have little analytics expertise and keep their data largely in silos, where it is difficult to integrate.

(Report: MIT Sloan Management Review)

In fact, 90% of the effort to deploy AI solutions lies in wrangling data and feature engineering. The more high-quality data, the more accurate the predictions. Bad data is the number one reason most AI projects fail.
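As a rough illustration of where that 90% goes, the sketch below shows typical wrangling and feature-engineering steps in pandas; the column names and derived features are hypothetical.

```python
# Sketch of typical wrangling and feature-engineering steps with pandas.
# Column names ("order_ts", "amount", "customer_id") are hypothetical.
import pandas as pd

def build_features(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    # Wrangling: fix types, drop bad rows and duplicates, handle missing values.
    df["order_ts"] = pd.to_datetime(df["order_ts"], errors="coerce")
    df = df.dropna(subset=["order_ts", "customer_id"]).drop_duplicates()
    df["amount"] = df["amount"].fillna(df["amount"].median())

    # Feature engineering: derive signals a model can actually learn from.
    df["order_hour"] = df["order_ts"].dt.hour
    df["is_weekend"] = df["order_ts"].dt.dayofweek >= 5
    df["amount_vs_customer_mean"] = (
        df["amount"] / df.groupby("customer_id")["amount"].transform("mean")
    )
    return df
```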

Without a strategy, AI solutions risk never making it into production.

As stated before, the business must value data as a corporate asset. That is fundamental to your strategy. However, the thinking must go further. Any AI program must be tightly aligned to support the corporate strategy. Artificial intelligence is an enhanced capability to achieve your business goals.

Companies committed to adopting AI need to make sure their strategies are transformational and should make AI central to revising their corporate strategies.

(Report: MIT Sloan Management Review)

AI for AI's sake often leads to long, drawn-out projects that never produce any real value. CEO support is ideal. Executive sponsors are critical to ensure proper alignment, set business metrics for any technology implementation, and provide air cover against any disputes over data or technology involvement.

AI entrepreneur Jordan Jacobs lays out the three ingredients for a winning strategy:

Getting buy-in from the top executives and the employees who will use the system, identifying clearly the business problem to be solved, and setting metrics to demonstrate the technology’s return on investment.

(Jordan Jacobs)

According to MIT Technology Review, there are key questions a business must answer to formulate their strategy:

  • What is the problem the business is trying to solve? Is this problem amenable to AI and machine learning?
  • How will the AI solve the problem? How has the business problem been reframed into a machine-learning problem? What data will be needed to input into the algorithms?
  • Where does the company source its data? How is the data labeled?
  • How often does the company validate, test and audit its algorithms for accuracy and bias?
  • Is AI, or machine learning, the best and only way to solve this problem? Do the benefits outweigh potential privacy violations and other negative impacts?

Without cloud-based technologies, many AI solutions can’t operate.

For most businesses, embracing cloud-based computing and storage technologies is critical for AI programs to perform effectively. Artificial intelligence models require tremendous compute power to process massive data sets, which means businesses need ready access to compute power on demand.

Since 2012, the amount of computation used in the largest AI training runs has been increasing exponentially with a 3.5 month doubling time (by comparison, Moore’s Law had an 18 month doubling period).

(OpenAI)
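To see what those doubling times imply, here is a quick back-of-the-envelope calculation; the six-year horizon is an arbitrary illustrative choice.

```python
# Back-of-the-envelope: growth factor implied by a doubling time over a horizon.
def growth_factor(doubling_months: float, horizon_months: float) -> float:
    return 2 ** (horizon_months / doubling_months)

horizon = 6 * 12  # six years, chosen only for illustration
print(f"3.5-month doubling over 6 years: ~{growth_factor(3.5, horizon):,.0f}x")
print(f"18-month doubling (Moore's Law) over 6 years: ~{growth_factor(18, horizon):,.0f}x")
# The AI-compute curve implies over a million-fold increase in that span,
# versus roughly sixteen-fold for an 18-month doubling period.
```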

If AI programs are to be successful, companies need to embrace cloud technologies. They need to be willing to adopt platforms that provision GPU clusters based on workloads required by deployed models. Cloud is important because owning the hardware can cost over a million dollars for a single cluster, according to OpenAI.

Without internal expertise, AI adoption is challenging.

To successfully move AI projects through the development life cycle, from data to production, you need in-house technical expertise. At minimum, you need dedicated data managers who can help wrangle data to train models. It is also important to have software engineers or DevOps leads who can move trained models into production environments so non-technical stakeholders can easily run reports. These two roles can be augmented by service providers that build and train models. However, it is better if you also have data scientists, analysts, and data engineers who can help strategize and execute projects.

Without well-defined technology processes, projects risk never making it into production.

Another common risk associated with AI projects is that trained models never transition into production. It is important to have a connection between the business strategy and the delivery of AI technology. It's especially important to have a plan in place for how to access the data, train the model, and hand off the model to be deployed as a usable solution.

Develop an operating model that defines AI roles and responsibilities for the business, technology teams, external vendors, and existing BPM centers of excellence. Some firms will separate model development — i.e., selecting and developing the AI, data science, and algorithms — from implementation. But over time, tech management will own deployment and operational management.

(Report: Forrester)

Moving forward, it will be important to have documented post-deployment processes. Once multiple models are in production, you need to monitor them for performance and have a documented process for re-training.
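A minimal sketch of what such a documented check might look like is below; the accuracy metric and the drift threshold are illustrative assumptions, not a prescription.

```python
# Minimal post-deployment monitoring sketch: flag a model for re-training
# when its live accuracy drops too far below the accuracy measured at release.
# The 5-point threshold and the accuracy metric are illustrative choices.
from dataclasses import dataclass

@dataclass
class ModelHealth:
    name: str
    baseline_accuracy: float   # accuracy measured when the model was deployed
    live_accuracy: float       # accuracy on recent, labeled production samples

def needs_retraining(health: ModelHealth, max_drop: float = 0.05) -> bool:
    return (health.baseline_accuracy - health.live_accuracy) > max_drop

for model in [ModelHealth("churn_v3", 0.91, 0.84), ModelHealth("fraud_v7", 0.88, 0.87)]:
    if needs_retraining(model):
        print(f"{model.name}: schedule re-training (accuracy drift detected)")
```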

Take an AI readiness assessment.

Many companies are not fully equipped to realize the full benefits created by artificial intelligence. And it is hard to know if “good enough” is good enough. Start by evaluating how equipped your business may be to successfully deploy projects. There are many ways to assess your readiness and de-risk investment. Online AI readiness assessments will help you start to understand if your organization has the prerequisites to successfully execute initial projects. If you are not ready, there is a lot of opportunity at stake. The most valuable thing you can do is to start to get ready. If you have executive buy-in, partner with consultancies or hire an AI strategist who can help put the pieces in place.

Originally Posted on KUNGFU.AI

About this Episode

Episode 89 of Voices in AI features Byron speaking with Cycorp CEO Douglas Lenat on developing AI and the very nature of intelligence.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, and I'm Byron Reese. I couldn't be more excited today. My guest is Douglas Lenat. He is the CEO of Cycorp of Austin, Texas, where GigaOm is based, and he's been a prominent researcher in AI for a long time. He was awarded the biennial IJCAI Computers and Thought Award in 1976. He created the machine learning program AM. He worked on (symbolic, not statistical) machine learning with his AM and Eurisko programs, knowledge representation, cognitive economy, blackboard systems and what he dubbed in 1984 as "ontological engineering."

He's worked in military simulations and numerous projects for the government for intelligence, with scientific organizations. In 1980 he published a critique of conventional random mutation Darwinism. He authored a series of articles in The Journal of Artificial Intelligence exploring the nature of heuristic rules. But that's not all: he was one of the original Fellows of the AAAI. And he's the only individual to serve on the scientific advisory board of both Apple and Microsoft. He is a Fellow of the AAAI and the Cognitive Science Society, one of the original founders of TTI/Vanguard in 1991. And on and on and on… and he was named one of the WIRED 25. Welcome to the show!

Douglas Lenat: Thank you very much Byron, my pleasure.

I have been so looking forward to our chat and I would just love, I mean I always start off asking what artificial intelligence is and what intelligence is. And I would just like to kind of jump straight into it with you and ask you to explain, to bring my listeners up to speed with what you’re trying to do with the question of common sense and artificial intelligence.

I think that the main thing to say about intelligence is that it’s one of those things that you recognize it when you see it, or you recognize it in hindsight. So intelligence to me is not just knowing things, not just having information and knowledge but knowing when and how to apply it, and actually successfully applying it in those cases. And what that means is that it’s all well and good to store millions or billions of facts.

But intelligence really involves knowing the rules of thumb, the rules of good judgment, the rules of good guessing that we all almost take for granted in our everyday life in common sense, and that we may learn painfully and slowly in some field where we’ve studied and practiced professionally, like petroleum engineering or cardiothoracic surgery or something like that. And so common sense rules like: bigger things can’t fit into smaller things. And if you think about it, every time that we say anything or write anything to other people, we are constantly injecting into our sentences pronouns and ambiguous words and metaphors and so on. We expect the reader or the listener has that knowledge, has that intelligence, has that common sense to decode, to disambiguate what we’re saying.

So if I say something like “Fred couldn’t put the gift in the suitcase because it was too big,” I don’t mean the suitcase was too big, I must mean that the gift was too big. In fact if I had said “Fred can’t put the gift in the suitcase because it’s too small” then obviously it would be referring to the suitcase. And there are millions, actually tens of millions of very general principles about how the world works: like big things can’t fit into smaller things, that we all assume that everybody has and uses all the time. And it’s the absence of that layer of knowledge which has made artificial intelligence programs so brittle for the last 40 or 50 years.
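As a toy illustration, one common-sense rule like "bigger things can't fit into smaller things" is enough to resolve the pronoun in that example. The sketch below is deliberately simplified and is not how Cyc represents knowledge.

```python
# Toy illustration: use the rule "a thing cannot fit into something smaller
# than itself" to resolve the pronoun in the gift-and-suitcase example.
# A deliberately simplified sketch, not Cyc's actual representation.
def resolve_it(container: str, contents: str, complaint: str) -> str:
    """Return which noun 'it' most plausibly refers to in
    '<contents> would not fit in <container> because it was too <complaint>'."""
    if complaint == "big":
        # Something being too big blocks the *contents* from fitting.
        return contents
    if complaint == "small":
        # Something being too small blocks the *container* from holding it.
        return container
    return "ambiguous"

print(resolve_it("suitcase", "gift", "big"))    # -> gift
print(resolve_it("suitcase", "gift", "small"))  # -> suitcase
```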

My number one question I ask every AI, as a Turing test sort of thing, is: what's bigger, a nickel or the sun? And there's never been one that's been able to answer it. And that's the problem you're trying to solve.

Right. And I think that there’s really two sorts of phenomena going on here. One is understanding the question and knowing the sense in which you’re talking about ‘bigger.’ One in the sense of perception if you’re holding up a nickel in front of your eye and so on and the other of course, is objectively knowing that the sun is actually quite a bit larger than a typical nickel and so on.

And so one of the things that we have to bring to bear, in addition to everything I already said, are Grice’s rules of communicating between human beings where we have to assume that the person is asking us something which is meaningful. And so we have to decide what meaningful question would they really possibly be having in mind like if someone says “Do you know what time it is?” It’s fairly juvenile and jerky to say “yes” because obviously what they mean is: please tell me the time and so on. And so in the case of the nickel and the sun, you have to disambiguate whether the person is talking about a perceptual phenomenon or an actual unstated physical reality.

So I wrote an article that I put a lot of time and effort into and I really liked it. I ran it on GigaOm and it was 10 questions that Alexa and Google Home answered differently, even though objectively the answers should have been identical, and in every one I kind of tried to dissect what went wrong.

And so I’m going to give you two of them and my guess is you’ll probably be able to intuit in both of them what the answer, what the problem was. The first one was: who designed the American flag? And they gave me different answers. One said “Betsy Ross,” and one said “Robert Heft,” so why do you think that happened?

All right so in some sense, both of them are doing what you might call an ‘animal level intelligence’ of not really understanding what you’re asking at all. But in fact doing the equivalent of (I won’t even call it natural language processing), let’s call it ‘string processing,’ looking at processed web pages, looking for the confluence, and preferably in the same order, of some of the words and phrases that were in your question and looking for essentially sentences of the form: X designed the U.S. flag or something.

And it’s really no different than if you ask, “How tall is the Eiffel Tower?” and you get two different answers: one based on answering from the one in Paris and one based on the one in Las Vegas. And so it’s all well and good to have that kind of superficial understanding of what it is you’re actually asking, as long as the person who’s interacting with the system realizes that the system isn’t really understanding them.

It’s sort of like your dog fetching a newspaper for you. It’s something which is you know wagging its tail and getting things to put in front of you, and then you as the person who has intelligence has to look at it and disambiguate what does this answer actually imply about what it thought the question was, as it were, or what question is it actually answering and so on.

But this is one of the problems that we experienced about 40 years ago in artificial intelligence, in the 1970s. We built AI systems using what today would be very clearly a neural net technology. Maybe there's been one small tweak in that field that's worth mentioning, involving additional hidden layers and convolution. And we built AIs using symbolic reasoning that used logic much like our Cyc system does today.

And again the actual representation looks very similar to what it does today and there had to be a bunch of engineering breakthroughs along the way to make that happen. But essentially in the 1970s we built AIs that were powered by the same two sources of power you find today, but they were extremely brittle and they were brittle because they didn’t have common sense. They didn’t have that kind of knowledge that was necessary in order to understand the context in which things were said, in order to understand the full meaning of what was said. They were just superficially reasoning. They had the veneer of intelligence.

We might have a system which was the world’s expert at deciding what kind of meningitis a patient might be suffering from. But if you told it about your rusted out old car or you told it about someone who is dead, the system would blithely tell you what kind of meningitis they probably were suffering from because it simply didn’t understand things like inanimate objects don’t get human diseases and so on.

And so it was clear that somehow we had to pull the mattress out of the road in order to let traffic toward real AI proceed. Someone had to codify the tens of millions of general principles like non humans don’t get human diseases, and causes don’t happen before their effects, and large things don’t fit into smaller things, and so on, and that it was very important that somebody do this project.

We thought we were actually going to have a chance to do it with Alan Kay at the Atari research lab and he assembled a great team. I was a professor at Stanford in computer science at the time, so I was consulting on that, but that was about the time that Atari peaked and then essentially had financial troubles as did everyone in the video game industry at that time, and so that project splintered into several pieces. But that was the core of the idea that somehow someone needed to collect all this common sense and represent it and make it available to make our AIs less brittle.

And then an interesting thing happened: right at that point in time when I was beating my chest and saying 'hey, someone please do this,' America was frightened to hear that the Japanese had announced something they called the 'fifth generation computing effort.' Japan basically threatened to do in computing hardware and software and AI what they had just finished doing in consumer electronics, and in the automotive industry: namely wresting leadership away from the West. And so America was very scared.

Congress quickly passed something (that's how you can tell it was many decades ago) called the National Cooperative Research Act, which basically said 'hey, all you large American companies: normally if you colluded on R&D, we would prosecute you for antitrust violations, but for the next 10 years, we promise we won't do that.' And so around 1981 a few research consortia sprang up in the United States for the first time in computing and hardware and artificial intelligence, and the first one of those was right here in Austin. It was called MCC, the Microelectronics and Computer Technology Corporation. Twenty-five large American companies each contributed a small number of millions of dollars a year to fund high risk, high payoff, long term R&D projects, projects that might take 10 or 20 or 30 or 40 years to reach fruition, but which, if they succeeded, could help keep America competitive.

And Admiral Bob Inman, who's also an Austin resident, one of my favorite people, one of the smartest and nicest people I've ever met, was the head of MCC, and he came and visited me at Stanford and said "Hey look Professor, you're making all this noise about what somebody ought to do. You have six or seven graduate students. If you do that here, it's going to take you a few thousand person-years. That means it's going to take you a few hundred years to do that project. If you move to the wilds of Austin, Texas and we put in ten times that effort, then you'll just barely live to see the end of it a few decades from now."

And that was a pretty convincing argument, and in some sense that is the summary of what I’ve been doing for the last 35 years here is taking time off from research to do an engineering project, a massive engineering project called Cycorp, which is collecting that information and representing it formally, putting it all in one place for the first time.

And the good news is since you’ve waited thirty five years to talk to me Byron, is that we’re nearing completion which is a very exciting phase to be in. And so most of our funding these days at Cycorp doesn’t come from the government anymore, doesn’t come from just a few companies anymore, it comes from a large number of very large companies that are actually putting our technology into practice, not just funding it for research reasons.

So that's big news. So when you have it all, and to be clear, just to summarize all of that: you've spent the last 35 years working on a system for collecting all of these rules of thumb like 'big things can't go in small things,' and listing them out, every one of them (dark things are darker than light things). And then not just listing them like in an Excel spreadsheet, but learning how to express them all in ways that they can be programmatically used.

So what do you have in the end when you have all of that? Like when you turn it on, will it tell me which is bigger: a nickel or the sun?

Sure. And in fact most of the questions that you might ask, the ones you might think anyone ought to be able to answer, Cyc is actually able to do a pretty good job of. It doesn't understand unrestricted natural language, so sometimes we'll have to encode the question in logic, in a formal language, but the language is pretty big. In fact the language has about a million and a half words and of those, about 43,000 are what you might think of as relationship-type words: like 'bigger than' and so on. And so by representing all of the knowledge in that logical language, instead of say just collecting all of that in English, what you're able to do is to have the system do automatic mechanical inference, logical deduction, so that if there is something which logically follows from one or two or 2,000 statements, then Cyc (our system) will grind through automatically and mechanically come up with that entailment.
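A toy sketch of that kind of mechanical entailment: chain a transitive "larger than" rule over a handful of asserted facts so the nickel-versus-sun question follows by deduction rather than lookup. This is plain Python, not Cyc's own language, and the facts are illustrative.

```python
# Toy forward-chaining sketch: derive 'larger than' entailments by transitivity,
# so "is the sun larger than a nickel?" follows from the asserted facts even
# though it was never stated directly. Not CycL; purely an illustration.
facts = {("sun", "earth"), ("earth", "car"), ("car", "nickel")}  # (bigger, smaller)

def entails_larger(bigger: str, smaller: str, kb: set) -> bool:
    # Repeatedly apply the transitivity rule until no new facts appear.
    derived = set(kb)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(derived):
            for (c, d) in list(derived):
                if b == c and (a, d) not in derived:
                    derived.add((a, d))
                    changed = True
    return (bigger, smaller) in derived

print(entails_larger("sun", "nickel", facts))  # True, by chaining three facts
```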

And so this is really the place where we diverge from everyone else in AI, who are either satisfied with machine learning representation, which is sort of very shallow, almost stimulus-response pair-type representation of knowledge; or people who are working in knowledge graphs and triple and quad stores and what people call ontologies these days, and so on, which really are almost, you can think of them like three or four word English sentences. And there are an awful lot of problems you can solve just with machine learning.

There is an even larger set of problems you can solve with machine learning, plus that kind of taxonomic knowledge representation and reasoning. But in order to really capture the full meaning, you really need an expressive logic: something that is as expressive as English. And think in terms of taking one of your podcasts and forcing it to be rewritten as a series of three word sentences. It would be a nightmare. Or imagine taking something like Shakespeare’s Romeo and Juliet, and trying to rewrite that as a set of three or four word sentences. It probably could theoretically be done, but it wouldn’t be any fun to do and it certainly wouldn’t be any fun to read or listen to, if people did that. And yet that’s the tradeoff that people are making. The tradeoff is that if you use that limited a logical representation, then it’s very easy and well understood to efficiently, very efficiently, do the mechanical inference that’s needed.

So if you represent a set of is-a-type-of relationships, you can combine them and chain them together and conclude that a nickel is a type of coin or something like that. But there really is this difference between the expressive logics that have been understood by philosophers for over 100 years, starting with Frege, and Whitehead and Russell and others, and the limited logics that others in AI are using today.

And so we essentially started digging this tunnel from the other side and said "We're going to be as expressive as we have to be, and we'll find ways to make it efficient," and that's what we've done. That's really the secret of what we've done: not just the massive codification and formalization of all of that common sense knowledge, but finding what turned out to be about 1,100 tricks and techniques for speeding up the inferring, the deducing process, so that we could get answers in real time instead of requiring thousands of years of computation.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

The fight against data growth and consolidation was lost a long time ago. Several factors contribute to the increasing amount of data we store in storage systems: users' desire to keep everything they create, organizational policies, new types of rich documents, new applications, and demanding regulations are just some of the culprits.

Many organizations try to eliminate storage silos and consolidate them in large repositories while applying conservative policies to save capacity. It's time to take a different approach and think about managing data correctly to get the most out of it. Unstructured data, if not correctly managed, could become a huge pain point from both financial and technical perspectives. It is time to think differently and transform this challenge into an opportunity. Let us show you how to get more efficiency and value out of your data!

Join this free 1-hour webinar from GigaOm Research that brings together experts in data storage and cloud computing, featuring GigaOm analyst Enrico Signoretti and special guest Krishna Subramanian from Komprise.

In this 1-hour webinar, you will discover:

  • The reality behind data growth and what to expect in the future
  • Why data storage consolidation failed
  • Why cloud isn’t a solution if not managed correctly
  • How to preserve and improve the user experience while changing how your infrastructure works
  • How to understand the value of stored data to take advantage of it
  • How data management is evolving and what you should expect in the next 12 to 18 months

Register now to join GigaOm and Komprise for this free expert webinar.

Who Should Attend:

  • CxOs
  • IT Professionals
  • Storage Decision makers

About this Episode

Episode 87 of Voices in AI features Byron speaking with Sameer Maskey of Fusemachines about the development of machine learning, languages and AI capabilities.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today my guest is Sameer Maskey. He is the founder and CEO of Fusemachines and he’s an adjunct assistant professor at Columbia. He holds an undergraduate degree in Math and Physics from Bates College and a PhD in Computer Science from Columbia University as well. Welcome to the show, Sameer.

Sameer Maskey: Thanks Byron, glad to be here.

Can you recall the first time you ever heard the term ‘artificial intelligence’ or has it always just been kind of a fixture of your life?

It's always been a fixture of my life. But the first time I heard about it in the way it is understood in today's world of what AI is, was in my first year undergrad when I was thinking of building talking machines. That was my dream, building a machine that can sort of converse with you. And in doing that research I happened to run into several books on AI, and particularly a book called Voice and Speech Synthesis, and that's how my journey in AI came to fruition.

So a conversational AI, it sounds like something that I mean I assume early on you heard about the Turing Test and thought ‘I wonder how you would build a device that could pass that.’ Is that fair to say?

Yeah, I'd heard about the Turing Test, but my interest stemmed from being able to build a machine that could just talk, read a book and then talk with you about it. And I was particularly interested in being able to build the machine in Nepal. So I grew up in Nepal and I was always interested in building machines that can talk Nepali. So more than the Turing Test, it was just this notion of 'can we build a machine that can talk in Nepali and converse with you?'

Would that require a general intelligence, or are we not anywhere near a general intelligence? For it to be able to, like, read a book and then have a conversation with you about The Great Gatsby or whatever. Would that require general intelligence?

Being able to build a machine that can read a book and then just talk about it would require, I guess, what is being termed artificial general intelligence. That raises many other kinds of questions about what AGI is and how it's different from AI and in what form. But we are still quite a long way from being able to build a machine that can just read a novel or a history book and then just be able to sit down with you and discuss it. I think we are quite far away from it, even though there's a lot of research being done from a conversational AI perspective.

Yeah I mean the minute a computer can learn something, you can just point it at the Internet and say “go learn everything” right?

Exactly. And we’re not there, at all.

Pedro Domingos wrote a book called The Master Algorithm. He said he believes there is some uber-algorithm we haven't yet discovered which accounts for intelligence in all of its variants, and part of the reason he believes that is that we're made with shockingly little code, DNA. And the amount of that code which is different from a chimp's, say, may only be six or seven megabytes in that tiny bit of code. It doesn't have intelligence obviously, but it knows how to build intelligence. So is it possible that… do you think that that level of artificial intelligence, whether you want to call it AGI or not, but that level of AI, do you think that might be a really simple thing that we just haven't… that's like right in front of us and we can't see it? Or do you think it's going to be a long hard slog to finally get there and it'll be a piece at a time?

To answer that question and to sort of be able to say maybe there is this Master Algorithm that's just not discovered, I think it's hard to make any claim towards it, because we as human beings, even neurologically, and neuroscientists and so forth, don't fully understand how all the pieces of cognition work. Like how my four and a half year old kid is just able to learn from a couple of different words, put them together and start having conversations about it. So I think we don't even understand how human brains work. I get a little nervous when people claim or suggest there's this one master algorithm that's just yet to be discovered.

We have this one trick that is working now where we take a bunch of data about the past and we study it with computers and we look for patterns, and we use those patterns to predict the future. And that's kind of what we do. I mean that's machine learning in a nutshell. And it's hard for me, for instance, to see how that will ever write The Great Gatsby, let alone read it and understand it. But how could it ever be creative? But maybe it can be.
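That "study the past, predict the future" loop, in its most compressed form, might look like the sketch below; the synthetic data and the choice of model are purely illustrative.

```python
# Machine learning "in a nutshell": fit patterns in past data, predict the future.
# Synthetic data and model choice are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # "the past": observed features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome with a learnable pattern

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)          # study the past
print("held-out accuracy:", model.score(X_test, y_test))    # predict the "future"
```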

Through one lens, we're not that far along with AI, and why do you think it's turning out to be so hard? I guess that's my question. Why is AI so hard? We're intelligent and we can kind of reflect on our own intelligence and we kind of figure out how we learn things, but we have this brute force way of just cramming a bunch of data down the machine's throat, and then it can spot spam email or route you through traffic and nothing else. So why is AI turning out to be so hard?

Because I think the machinery that's been built over many, many years, the way AI has evolved to the point it is at right now, is, like you pointed out, still a lot of systems looking at a lot of historical data, building models that figure out patterns in it and making predictions from it, and it requires a lot of data. And one of the reasons deep learning is working very well is there's so much data right now.

We haven't figured out how, with a very little bit of data, you can create generalization of the patterns to be able to do things. And that piece, how to model or build a machine that can generalize its decision-making process based on just a few pieces of information… we haven't figured that out. And until we figure that out, it is still going to be very hard to make AGI or a system that can just write The Great Gatsby. And I don't know how long it will be until we figure that part out.

A lot of times people think that a general intelligence is just an evolutionary product of narrow. We get narrow, then we get… First it can play Go, and then it can play all games, all strategy games. And then it can do this, and it gets better and better, and then one day it's general.

Is it possible that what we know how to do now has absolutely nothing to do with general intelligence? Like we haven’t even started working on that problem, it’s a completely different problem. All we’re able to do is make things that can fake intelligence, but we don’t know how to make anything that’s really intelligent. Or do you think we are on a path that’s going to just get better and better and better until one day we have something that can make coffee and play Go and compose sonnets?

There is some new research being done on AGI, but the path right now, where we train more and more data on bigger and bigger architectures and sort of simulate or fake intelligence, I don't think that would probably lead to solutions that can have general intelligence the way we are talking about. It is still a very similar model to what we've been using before, and that was invented a long time ago.

They are much more popular right now because they can do more with more data, with more compute power and so forth. So when a system is able to drive a car based on computer vision and a neural net and the learning behind it, it simulates intelligence. But it's probably not really the way we describe human intelligence, such that it can write books and write poetry. So are we on the path to AGI? I don't think the current evolution of the way the machinery is done is going to lead you to AGI. There are probably some fundamentally new ways of exploring things that are required, and new ways of framing the problem, to think about how general intelligence works.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

About this Episode

Episode 86 of Voices in AI features Byron speaking with fellow author Amir Husain about the nature of Artificial Intelligence and Amir’s book The Sentient Machine.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, and I’m Byron Reese. Today my guest is Amir Husain. He is the founder and CEO of SparkCognition Inc., and he’s the author of The Sentient Machine, a fine book about artificial intelligence. In addition to that, he is a member of the AI task force with the Center for New American Security. He is a member of the board of advisors at UT Austin’s Department of Computer Science. He’s a member of the Council on Foreign Relations. In short, he is a very busy guy, but has found 30 minutes to join us today. Welcome to the show, Amir.

Amir Husain: Thank you very much for having me Byron. It’s my pleasure.

You and I had a cup of coffee a while ago and you gave me a copy of your book and I’ve read it and really enjoyed it. Why don’t we start with the book. Talk about that a little bit and then we’ll talk about SparkCognition Inc. Why did you write The Sentient Machine: The Coming Age of Artificial Intelligence?

Byron, I wrote this book because I thought that there was a lot of writing on artificial intelligence—what it could be. There’s a lot of sci fi that has visions of artificial intelligence and there’s a lot of very technical material around where artificial intelligence is as a science and as a practice today. So there’s a lot of that literature out there. But what I also saw was there was a lot of angst back in 2015, 2014. I actually had a personal experience in that realm where outside of my South by Southwest talks there was an anti-AI protest.

So just watching those protesters and seeing what their concerns were, I felt that a lot of the sort of philosophical questions, existential questions around the advent of AI… if AI indeed ends up being like Commander Data, it has sentience, it becomes artificial general intelligence, then it will be able to do the jobs better than we can and it will be more capable in, let's say, the 'art of war' than we are, and therefore does this mean that we will lose our jobs? We will be meaningless and our lives will be lacking in meaning and maybe the AI will kill us?

These are the kinds of concerns that people have had around AI and I wanted to sort of reflect on notions of man’s ability to create—the aspects around that that are embedded in our historical and religious tradition and what our conception of Man vs. he who can create, our creator—what those are and how that influences how we see this age of AI where man might be empowered to create something which can in turn create, which can in turn think.

There’s a lot of folks also that feel that this is far away, and I am an AI practitioner and I agree I don’t think that artificial general intelligence is around the corner. It’s not going to happen next May, even though I suppose some group could surprise us, but the likely outcome is that we are going to wait a few decades. I think waiting a few decades isn’t a big deal because in the grand scheme of things, in the history of the human race, what is a few decades? So ultimately the questions are still valid and this book was written to address some of those existential questions lurking in elements of philosophy, as well as science, as well as the reality of where AI stands at the moment.

So talk about those philosophical questions just broadly. What are those kinds of questions that will affect what happens with artificial intelligence?

Well I mean one question is a very simple one of self-worth. We tend to define ourselves by our capabilities and the jobs that we do. Many of our last names in many cultures are literally indicative of our profession. You know goldsmiths as an example, farmer as an example. And this is not just a European thing. Across the world you see this phenomenon of last names just reflecting the profession of a woman or a man. And it is to this extent that we internalize the jobs that we do as essentially being our identity, literally to the point where we take it on as a name.

So now when you de-link a man or a woman's ability to produce or to engage in that particular labor that is a part of their identity, then what's left? Are you still the human that you were with that skill? Are you less of a human being? Is humanity in any way linked to your ability to conduct this kind of economic labor? And this is one question that I explored in the book, because I don't know whether people really contemplate this issue so directly and think about it in philosophical terms, but I do know that subjectively people get depressed when they're confronted with the idea that they might not be able to do the job that they are comfortable doing or have been comfortable doing for decades. So at some level obviously it's having an impact.

And the question then is: is our ability to perform a certain class of economic labor in any way intrinsically connected to identity? Is it part of humanity? And I sort of explore this concept and I say “OK well, let’s sort of take this away and let’s cut this away let’s take away all of the extra frills, let’s take away all of what is not absolutely fundamentally uniquely human.” And that was an interesting exercise for me. The conclusions that I came to—I don’t know whether I should spoil the book by sharing it here—but in a nutshell—this is no surprise—that our cognitive function, our higher order thinking, our creativity, these are the things which make us absolutely unique amongst the known creation. And it is that which makes us unique and different. So this is one question of self worth in the age of AI, and another one is…

Just to put a pin in that for a moment: in the United States the workforce participation rate is only about 50% to begin with, so only about 50% of people work, because you've got adults that are retired, you have people who are unable to work, you have people that are independently wealthy… I mean we already had like half of adults not working. Does it really rise to the level of a philosophical question when it's already something we have thousands of years of history with? Like what are the really meaty things that AI gets at? For instance, do you think a machine can be creative?

Absolutely I think the machine can be creative.

You think people are machines?

I do think people are machines.

So then if that’s the case, how do you explain things like the mind? How do you think about consciousness? We don’t just measure temperature, we can feel warmth, we have a first person experience of the universe. How can a machine experience the world?

Well you know look there’s this age old discussion about qualia and there’s this discussion about the subjective experience, and obviously that’s linked to consciousness because that kind of subjective experience requires you to first know of your own existence and then apply the feeling of that experience to you in your mind. Essentially you are simulating not only the world but you also have a model of yourself. And ultimately in my view consciousness is an emergent phenomenon.

You know the very famous Marvin Minsky hypothesis of The Society of Mind. And in all of its details I don’t know that I agree with every last bit of it, but the basic concept is that there are a large number of processes that are specialized in different things that are running in the mind, the software being the mind, and the hardware being the brain, and that the complex interactions of a lot of these things result in something that looks very different from any one of these processes independently. This in general is a phenomenon that’s called emergence. It exists in nature and it also exists in computers.

One of the first few graphical programs that I wrote as a child in BASIC was drawing straight lines, and yet on a CRT display, what I actually saw were curves. I'd never drawn curves, but it turns out that when you light a large number of pixels with a certain gap in the middle and it's on a CRT display, there are all sorts of effects and interactions, like the Moiré effect and so on and so forth, where what you thought you were drawing was lines, and it shows up, if you look at it from an angle, as curves.

So I mean the process of drawing a line is nothing like drawing a curve; there was no active intent or design to produce a curve, the curve just shows up. It's a very simple example that a kid writing a few lines of BASIC can produce and look at, but there are obviously more complex examples of emergence as well. And so consciousness to me is an emergent property, it's an emergent phenomenon. It's not about the one thing.
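In the spirit of that anecdote, here is a small sketch that draws nothing but straight line segments whose envelope reads as a curve; it assumes matplotlib and reproduces the general effect rather than the specific CRT artifact described above.

```python
# Draw only straight line segments, yet the picture that emerges reads as a
# curve (the envelope of the segments). Requires matplotlib; this illustrates
# emergence in general, not the specific CRT/Moire effect described above.
import matplotlib.pyplot as plt

n = 30
for i in range(n + 1):
    # Each call draws a single straight segment from (0, n - i) to (i, 0).
    plt.plot([0, i], [n - i, 0], color="black", linewidth=0.7)

plt.title("Only straight lines are drawn, but a curve appears")
plt.gca().set_aspect("equal")
plt.savefig("emergent_curve.png")
```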

I don’t think there is a consciousness gland. I think that there are a large number of processes that interact to produce this consciousness. And what does that require? It requires for example a complex simulation capability which the human brain has, the ability to think about time, to think about objects, model them and to also apply your knowledge of physical forces and other phenomena within your brain to try and figure out where things are going.

So that simulation capability is very important, and then the other capability that's important is the ability to model yourself. So when you model yourself and you put yourself in a simulator and you see all these different things happening, there is not the real pain that you experience when you simulate, for example, being struck by an arrow, but there might be some fear. And why is that fear emanating? It's because you watch your own model in your imagination, in your simulation, suffer some sort of a problem. And now that is very internal, right? None of this has happened in the external world, but you're conscious of this happening. So to me, at the end of the day, it has some fundamental requirements. I believe simulation and self-modeling are two of those requirements, but ultimately it's an emergent property.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

In the era of data lakes and digital transformation, data catalogs are taking on new functionality and revitalized importance. You'd be hard put to manage your data landscape responsibly without a data catalog, but just because catalogs are needed doesn't mean they are just a dry requirement. Done right, data catalogs can make self-service analytics across the enterprise far more productive, widely adopted and — dare we say it — fun.

Data catalogs can facilitate frictionless discoverability, fostering the user enthusiasm needed for a robust data culture. They help users find the data they want and discover data they may not have known was available. Machine learning/AI-driven recommendations help users too, especially when they’re security-aware and based on what other users have found useful.
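A toy sketch of the "users who used this dataset also used…" idea, built from simple co-occurrence counts; real catalogs layer in security awareness and richer signals, and the dataset and user names here are hypothetical.

```python
# Toy "people who used this dataset also used..." recommender based on
# co-occurrence counts in users' query history. Real data catalogs add
# security-awareness and richer signals; this shows only the basic idea.
from collections import Counter
from itertools import combinations

usage = {  # hypothetical: which datasets each analyst has queried
    "ana":   {"sales_orders", "customers", "returns"},
    "bao":   {"sales_orders", "customers"},
    "carla": {"customers", "web_sessions"},
}

co_use = Counter()
for datasets in usage.values():
    for pair in combinations(sorted(datasets), 2):
        co_use[pair] += 1

def recommend(dataset: str, top_n: int = 3) -> list:
    scores = Counter()
    for (a, b), count in co_use.items():
        if dataset == a:
            scores[b] += count
        elif dataset == b:
            scores[a] += count
    return [name for name, _ in scores.most_common(top_n)]

print(recommend("sales_orders"))  # e.g. ['customers', 'returns']
```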

People are an essential part of successful data catalog implementations, and social collaboration features help spread analytics enthusiasm. Peer endorsement of data sets, user-contributed documentation and peer-to-peer communication are all important and motivate users toward successful adoption.

To learn more about the benefits of grassroots data catalog implementations and how to get teams excited to join in on all the fun, join us for this free 1-hour webinar from GigaOm Research. The webinar features GigaOm analyst Andrew Brust and special guest Aaron Kalb, Co-founder & VP of Design and Strategic Initiatives from Alation, a frontrunner in the data catalog arena.

In this 1-hour webinar, you will discover:

  • Why data catalogs have emerged so prominently in the data lake/BI/analytics world
  • How data catalogs can be user-positive, not just tools for IT
  • The role of machine learning and AI in data catalog success
  • Where governed personally identifiable information (PII) and business glossary connectivity tie in

Register now to join GigaOm and Alation for this free expert webinar.

Who Should Attend:

  • CTOs
  • Chief Data Officers
  • Business Analysts
  • Data Analysts
  • Data Engineers
  • Data Stewards
  • Database Administrators (DBAs)

About this Episode

Episode 84 of Voices in AI features host Byron Reese and David Cox discussing classifications of AI and how the research has been evolving and growing.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm and I'm Byron Reese. I'm so excited about today's show. Today we have David Cox. He is the Director of the MIT-IBM Watson AI Lab, which is part of IBM Research. Before that he spent 11 years teaching at Harvard, interestingly in the Life Sciences. He holds an AB degree from Harvard in Biology and Psychology, and he holds a PhD in Neuroscience from MIT. Welcome to the show David!

David Cox: Thanks. It’s a great pleasure to be here.

I always like to start with my Rorschach question which is, “What is intelligence and why is Artificial Intelligence artificial?” And you’re a neuroscientist and a psychologist and a biologist, so how do you think of intelligence?

That’s a great question. I think we don’t necessarily need to have just one definition. I think people get hung up on the words, but at the end of the day, what makes us intelligent, what makes other organisms on this planet intelligent is the ability to absorb information about the environment, to build models of what’s going to happen next, to predict and then to make actions that help achieve whatever goal you’re trying to achieve. And when you look at it that way that’s a pretty broad definition.

Some people are purists and they want to say this is AI, but this other thing is just statistics or regression or if-then-else loops. At the end of the day, what we’re about is we’re trying to make machines that can make decisions the way we do and sometimes our decisions are very complicated. Sometimes our decisions are less complicated, but it really is about how do we model the world, how do we take actions that really drive us forward?

It's funny, the AI word too. I'm a recovering academic, as you said. I was at Harvard for many years, and I think as a field we were really uncomfortable with the term 'AI,' so we desperately wanted to call it anything else. In 2017 and before, we wanted to call it 'machine learning,' or we wanted to call it 'deep learning' to be more specific. But in 2018, for whatever reason, we all just gave up and we just embraced this term 'AI.' In some ways I think it's healthy. But when I joined IBM I was actually really pleasantly surprised by some framing that the company had done.

IBM does this thing called the Global Technology Outlook or GTO which happens every year and the company tries to collectively figure out—research plays a very big part of this—we try to figure out ‘What does the future look like?’ And they came up with this framing that I really like for AI. They did something extremely simple. They just put some adjectives in front of AI and I think it clarifies the debate a lot.

So basically, what we have today like deep learning, machine learning, tremendously powerful technologies are going to disrupt a lot of things. We call those Narrow AI and I think that narrow framing really calls attention to the ways in which even if it’s powerful, it’s fundamentally limited. And then on the other end of the spectrum we have General AI.  This is a term that’s been around for a long time, this idea of systems that can decide what they want to do for themselves that are broadly autonomous and that’s fine. Those are really interesting discussions to have but we’re not there as a field yet.

In the middle, and I think this is really where the interesting stroke is, there's this notion we have of Broad AI, and I think that's really where the stakes are today. How do we have systems that are able to go beyond what we have that's narrow, without necessarily getting hung up on all these notions of what 'General Intelligence' might be? So things like having systems that are interpretable, having systems that can work with different kinds of data, that can integrate knowledge from other sources: that's sort of the domain of Broad AI. Broad Intelligence is really what the lab I lead is all about.

There’s a lot in there and I agree with you. I’m not really that interested in that low end and what’s the lowest bar in AI. What makes the question interesting to me is really the mechanism by which we are intelligent, whatever that is, and does that intelligence require a mechanistic reductionist view of the world? In other words, is that something that you believe we’re going to be able to duplicate either… in terms of its function, or are we going to be able to build machines that are as versatile as a human in intelligence, and creative and would have emotions and all of the rest, or is that an open question?

I have no doubt that we’re going to eventually, as a human race be able to figure out how to build intelligent systems that are just as intelligent as we are. I think in some of these things, we tend to think about how we’re different from other kinds of intelligences on Earth. We do things like… there was a period of time where we wanted to distinguish ourselves from the animals and we thought of reason, the ability to reason and do things like mathematics and abstract logic was what was uniquely human about us.

And then, computers came along and all of a sudden, computers can actually do some of those things better than we can even in arithmetic and solving complex logic problems or math problems. Then we move towards thinking that maybe it’s emotion. Maybe emotion is what makes us uniquely human and rational. It was a kind of narcissism I think to our own view which is understandable and justifiable. How are we special in this world?

But I think in many ways we’re going to end up having systems that do have something like emotion. Even you look at reinforcement learning—those systems have a notion of reward. I don’t think it’s such a far reach to think maybe we’ll even in a sci-fi world have machines that have senses of pleasure and hopes and ambitions and things like that.

At the end of the day, our brains are computers. I think that's sometimes a controversial statement, but it's one that I think is well-grounded. It's a very sophisticated computer. It happens to be made out of biological materials. But at the end of the day, it's a tremendously efficient, tremendously powerful, tremendously parallel nanoscale biological computer. These are like biological nanotechnology. And to the extent that it is a computer, and to the extent that we can agree on that, Computer Science gives us equivalencies. We can build a computer with different hardware. We don't have to emulate the hardware. We don't have to slavishly copy the brain, but it is sort of a given that we will eventually be able to do everything the brain does in a computer. Now of course all that's farther off, I think. Those are not the stakes—those aren't the battlefronts that we're working on today. But I think the sky's the limit in terms of where AI can go.

You mentioned Narrow and General AI, and this classification you're putting in between them is Broad, and I have an opinion and I'm curious what you think. At least with regard to Narrow and General, they are not on a continuum; they're actually unrelated technologies. Would you agree with that or not?

Would you say that a Narrow AI gets a little better, then a little better, and a little better, and then, ta-da, one day it can compose a Hamilton? Or do you think that they may be completely unrelated, that this model of 'Hey, let's take a lot of data about the past and study it very carefully to learn to do one thing' is very different from whatever General Intelligence is going to be?

There’s this idea that if you want to go to the moon, one way to go to the moon—to get closer to the moon—is to climb the mountain.

Right. Exactly.

And you’ll get closer, but you’re not on the right path. And, maybe you’d be better off on top of a building or a little rocket and maybe go as high as the tree or as high as the mountain, but it’ll get you where you need to go. I do think there is a strong flavor of that with today’s AI.

And in today’s AI, if we’re plain about things, is deep learning. This model… what’s really been successful in deep learning is supervised learning. We train a model to do every part of seeing based on classifying objects and you classify a lot – many images, you have lots of training data and you build a statistical model. And that’s everything the model has ever seen. It has to learn from those images and from that task.

And we’re starting to see that actually the solutions you get—again, they are tremendously useful, but they do have a little bit of that quality of climbing a tree or climbing a mountain. There’s a bunch of recent work suggesting… basically they’re looking at texture, so a lot of solution for supervision is looking at the rough texture.

There are also some wonderful examples where you take a captioning system, a system that can take an image and produce a caption. It can produce wonderful captions in cases where the images look like the ones it was trained on, but show it anything just a little bit weird, like an airplane that's about to crash or a family fleeing their home on a flooding beach, and it'll produce things like 'an airplane is on the tarmac at an airport' or 'a family is standing on a beach.' It kind of misses the point: it was able to do something because it learned correlations between the inputs it was given and the outputs we asked it for, but it didn't have a deep understanding. And I think that's the crux of what you're getting at, and I agree, at least in part.
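For readers who want to see what that supervised setup looks like in practice, here is a minimal sketch, assuming PyTorch and torchvision and a placeholder folder of labeled images; the model, dataset path, and hyperparameters are illustrative choices, not anything referenced in the conversation.

```python
# Minimal sketch of supervised image classification: many labeled images in,
# one statistical model out. Assumes PyTorch/torchvision are installed and
# that "data/train" contains one subdirectory of images per class (placeholder).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_data = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# A standard convolutional network with its last layer resized to our classes.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        # The model learns correlations between pixels and labels; that is all it sees.
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Everything such a model will ever 'know' comes from those image-label correlations, which is exactly the limitation being discussed here.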

So with Broad AI, the way you're thinking of it, it sounds to me, just from the few words you've said, like it's an incremental improvement over Narrow; it's not a junior version of General AI. Would you agree with that? You're basically taking techniques we have and doing them bigger and more expansively and smarter and better, or is that not the case?

No. When we think about Broad AI, we really are thinking a little bit of 'press the reset button, but don't throw away things that work.' Deep learning is a set of tools that is tremendously powerful, and we'd be kind of foolish to throw them away. But when we think about Broad AI, what we're really getting at is how do we start to make contact with that deep structure in the world… like common sense.

We have all kinds of common sense. When I look at a scene, when I look at the desk in front of me, I didn't learn to do tasks that have to do with that desk from lots and lots of labeled examples, or even from many, many trials in a reinforcement-learning kind of setup. I know things about the world, simple things, things we take for granted: I know that my desk is probably made of wood, I know that wood is a solid, and solids can't pass through other solids. And I know that it's probably flat, and if I put my hand out I would be able to orient it in a position appropriate to hover above it…

There are all these affordances and all this super simple commonsense stuff that you don't get when you just do brute-force statistical learning. When we think about Broad AI, what we're really thinking about is 'How do we infuse that knowledge, that understanding, and that common sense?' And one area that we're excited about and that we're working on here at the MIT-IBM Lab is this idea of neuro-symbolic hybrids.

So again, this is in the spirit of 'don't throw away neural networks.' They're wonderful at extracting certain kinds of statistical structure from the world: a convolutional neural network does a wonderful job of extracting information from an image, and LSTMs and recurrent neural networks do a wonderful job of extracting structure from natural language. But the idea is to build in symbolic systems as first-class citizens in a hybrid system that combines them all together.

Some of the work we’re doing now is building systems where we use neural networks to extract structure from these noisy, messy inputs of vision and different modalities but then actually having symbolic AI systems. Symbolic AI systems have been around basically contemporaneous with neural networks. They’ve been ‘in the wings’ all this time. Neural networks deep learning is in any way… everyone knows this is a rebrand of the neural networks from the 1980s that are suddenly powerful again. They’re powerful for the first time because we have enough data and we have enough compute.

I think in many ways a lot of the symbolic ideas, sort of logical operations, planning, things like that, are also very powerful techniques, but they haven't really been able to shine yet, partly because they've been waiting for something, just the way that neural networks were waiting for compute and data to come along. I think in many ways some of these symbolic techniques have been waiting for neural networks to come along, because neural networks can bridge that gap from the messiness of the signals coming in to this sort of symbolic regime where we can start to actually work. One of the things we're really excited about is building these systems that can bridge across that gap.
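To make the neuro-symbolic idea concrete, here is a small conceptual sketch, not the lab's actual system: a stubbed-out perception function stands in for a neural vision model that turns an image into symbolic facts, and a few hand-written rules stand in for the symbolic layer that reasons over them.

```python
# Conceptual sketch of a neuro-symbolic pipeline (illustrative only).
# The "neural" front end below is a stub standing in for a real vision model;
# the symbolic back end applies simple commonsense-style rules to its output.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    name: str
    color: str
    material: str

def neural_perception(image) -> list[DetectedObject]:
    """Stand-in for a CNN: a real system would return objects detected in the
    image along with predicted attributes."""
    return [
        DetectedObject("desk", "brown", "wood"),
        DetectedObject("mug", "white", "ceramic"),
    ]

# Symbolic layer: facts and rules the statistics alone would not provide.
SOLID_MATERIALS = {"wood", "ceramic", "metal"}

def is_solid(obj: DetectedObject) -> bool:
    return obj.material in SOLID_MATERIALS

def can_rest_on(top: DetectedObject, bottom: DetectedObject) -> bool:
    # Rule: a solid object can rest on another solid object (solids don't pass through solids).
    return is_solid(top) and is_solid(bottom)

objects = neural_perception(image=None)  # placeholder input
desk = next(o for o in objects if o.name == "desk")
mug = next(o for o in objects if o.name == "mug")
print(can_rest_on(mug, desk))  # True: symbolic reasoning over neural outputs
```

The point of the hybrid is the division of labor: the neural part handles the messy incoming signal, and the symbolic part carries the explicit knowledge and rules that brute-force statistical learning doesn't give you.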

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Gigaom » Big Data by Jp Morgenthal - 2M ago

Adoption and use of an API are heavily dependent upon the consistency, reliability, and intuitiveness of that API. It is a delicate balance. If the API is too difficult to use or the results are inconsistent across uses, its value will be limited and, hence, adoption will be low. Our research has found that many enterprises have focused on the technical requirements to deliver and scale an API, but have not focused on the corresponding non-functional requirements of a comprehensive API strategy, which include support, service-level management, ecosystem development, and usability in the face of legacy integration or the need to stream large amounts of data in response to a request.

Key Findings:

  • Enterprises are creating APIs at an incredibly fast pace often without paying attention to the implications for operational overhead or a comprehensive strategy that goes beyond the technology for building and deployment.
  • A successful API strategy must include provisions for developer onboarding and support, clear policies related to service levels and use, the ability to operate in a hybrid cloud model, and a proactive monetization scheme, with the latter focused primarily on support for various sizing approaches and scalability (a minimal illustration of service-level enforcement follows this list).
  • There is still a strong requirement to “roll your own” API infrastructure to support a comprehensive API strategy. While many of today’s API gateway and management products address the basics, businesses will still need to invest in building a developer portal, providing support, growing a community, and addressing many of the operational issues. Microservices architecture is helping to alleviate some of the overhead associated with deployment and scalability by packaging APIs into manageable entities.
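As a small illustration of the non-functional side of an API strategy, the sketch below adds per-client service-level enforcement in front of an endpoint; Flask, the header name, and the quota value are assumptions made for the example, not a recommendation of any particular product.

```python
# Illustrative sketch of a non-functional API concern: a per-client rate limit
# enforced outside the business logic. Flask, the X-API-Key header, and the
# 60-requests-per-minute quota are assumptions made for this example.
import time
from collections import defaultdict
from flask import Flask, jsonify, request

app = Flask(__name__)

REQUESTS_PER_MINUTE = 60           # hypothetical service-level policy
window_start = defaultdict(float)  # api_key -> start of current window
request_count = defaultdict(int)   # api_key -> requests seen in current window

@app.before_request
def enforce_rate_limit():
    key = request.headers.get("X-API-Key", "anonymous")
    now = time.time()
    if now - window_start[key] > 60:
        window_start[key] = now
        request_count[key] = 0
    request_count[key] += 1
    if request_count[key] > REQUESTS_PER_MINUTE:
        return jsonify(error="rate limit exceeded"), 429

@app.route("/v1/orders")
def list_orders():
    # Placeholder business endpoint.
    return jsonify(orders=[])

if __name__ == "__main__":
    app.run()
```

In practice this kind of logic typically lives in an API gateway or management layer rather than in the service itself, which is part of the operational overhead the findings above describe.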