About this Episode

Episode 87 of Voices in AI features Byron speaking with Sameer Maskey of Fusemachines about the development of machine learning, languages and AI capabilities.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today my guest is Sameer Maskey. He is the founder and CEO of Fusemachines and he’s an adjunct assistant professor at Columbia. He holds an undergraduate degree in Math and Physics from Bates College and a PhD in Computer Science from Columbia University as well. Welcome to the show, Sameer.

Sameer Maskey: Thanks Byron, glad to be here.

Can you recall the first time you ever heard the term ‘artificial intelligence’ or has it always just been kind of a fixture of your life?

It’s always been a fixture of my life. But the first time I heard about it in the way it is understood in today’s world of what AI is was in my first year of undergrad, when I was thinking of building talking machines. That was my dream: building a machine that can sort of converse with you. And in doing that research I happened to run into several books on AI, particularly a book called Voice and Speech Synthesis, and that’s how my journey in AI came to fruition.

So a conversational AI… it sounds like, I assume, early on you heard about the Turing Test and thought, ‘I wonder how you would build a device that could pass that.’ Is that fair to say?

Yeah, I’d heard about the Turing Test, but my interest stemmed from being able to build a machine that could just talk: read a book and then talk with you about it. And I was particularly interested in being able to build the machine in Nepal. I grew up in Nepal, and I was always interested in building machines that can talk in Nepali. So more than the Turing Test, it was just this notion of ‘can we build a machine that can talk in Nepali and converse with you?’

Would that require a general intelligence, or are we not anywhere near a general intelligence? For it to be able to, like, read a book and then have a conversation with you about The Great Gatsby or whatever, would that require general intelligence?

Being able to build a machine that can read a book and then just talk about it would require, I guess, what is being termed artificial general intelligence. That raises many other kinds of questions about what AGI is and how it’s different from AI, and in what form. But we are still quite a long way from being able to build a machine that can just read a novel or a history book and then sit down with you and discuss it. I think we are quite far away from it, even though there’s a lot of research being done from a conversational AI perspective.

Yeah I mean the minute a computer can learn something, you can just point it at the Internet and say “go learn everything” right?

Exactly. And we’re not there, at all.

Pedro Domingos wrote a book called The Master Algorithm. He said he believes there is some uber-algorithm we haven’t yet discovered which accounts for intelligence in all of its variants, and part of the reason he believes that is that we’re made with shockingly little code: DNA. And the amount of that code which is different from a chimp’s, say, may be only six or seven megabytes. That tiny bit of code doesn’t have intelligence, obviously, but it knows how to build intelligence. So is it possible that… do you think that that level of artificial intelligence, whether you want to call it AGI or not, but that level of AI, might be a really simple thing that we just haven’t… that’s right in front of us and we can’t see it? Or do you think it’s going to be a long hard slog to finally get there, a piece at a time?

To answer that question, and to be able to say whether maybe there is this Master Algorithm that’s just not yet discovered, I think it’s hard to say anything definitive, because we as human beings (even neurologically, even the neuroscientists and so forth) don’t fully understand how all the pieces of cognition work. Like how my four-and-a-half-year-old kid is able to learn from a couple of different words, put them together, and start having conversations. So I think we don’t even understand how human brains work, and I get a little nervous when people claim or suggest there’s this one master algorithm that’s just yet to be discovered.

We have this one trick that is working now, where we take a bunch of data about the past, we study it with computers, we look for patterns, and we use those patterns to predict the future. And that’s kind of what we do. I mean, that’s machine learning in a nutshell. And it’s hard for me, for instance, to see how that will ever write The Great Gatsby, let alone read it and understand it. How could it ever be creative? But maybe it can be.

Through one lens, we’re not that far along with AI, and why do you think it’s turning out to be so hard? I guess that’s my question. Why is AI so hard? We’re intelligent, we can kind of reflect on our own intelligence, and we can kind of figure out how we learn things, but we have this brute-force way of just cramming a bunch of data down the machine’s throat, and then it can spot spam email or route you through traffic and nothing else. So why is AI turning out to be so hard?

Because the machinery that’s been built over many, many years, as AI has evolved to where it is right now, is, like you pointed out, still a lot of systems looking at a lot of historical data, building models that figure out patterns in it, and making predictions from it, and it requires a lot of data. And one of the reasons deep learning is working very well is that there’s so much data right now.

We haven’t figured out how, with very little data, you can generalize from patterns to be able to do things. And that piece, how to build a machine that can generalize a decision-making process based on just a few pieces of information… we haven’t figured that out. And until we figure that out, it is still going to be very hard to make AGI, or a system that can just write The Great Gatsby. And I don’t know how long it will be until we figure that part out.

A lot of times people think that a general intelligence is just an evolutionary product of narrow intelligence. We get narrow, then we get… first it can play Go, then it can play all games, all strategy games. Then it can do this, and it gets better and better, and then one day it’s general.

Is it possible that what we know how to do now has absolutely nothing to do with general intelligence? Like we haven’t even started working on that problem, it’s a completely different problem. All we’re able to do is make things that can fake intelligence, but we don’t know how to make anything that’s really intelligent. Or do you think we are on a path that’s going to just get better and better and better until one day we have something that can make coffee and play Go and compose sonnets?

There is some new research being done on AGI, but the path right now, where we train on more and more data with bigger and bigger architectures and sort of simulate or fake intelligence, I don’t think that will lead to solutions that have general intelligence the way we are talking about. It is still a very similar model to what we’ve been using before, one that was invented a long time ago.

These models are much more popular right now because they can do more, with more data, with more compute power and so forth. So when a system is able to drive a car based on computer vision, with a neural net and learning behind it, it simulates intelligence. But it’s probably not intelligence the way we describe human intelligence, such that it can write books and write poetry. So are we on the path to AGI? I don’t think the current evolution of the machinery is going to lead you to AGI. There are probably some fundamentally new ways of exploring things that are required, in how the problem is framed and in how we think about how general intelligence works.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.


About this Episode

Episode 86 of Voices in AI features Byron speaking with fellow author Amir Husain about the nature of Artificial Intelligence and Amir’s book The Sentient Machine.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, and I’m Byron Reese. Today my guest is Amir Husain. He is the founder and CEO of SparkCognition Inc., and he’s the author of The Sentient Machine, a fine book about artificial intelligence. In addition to that, he is a member of the AI task force with the Center for New American Security. He is a member of the board of advisors at UT Austin’s Department of Computer Science. He’s a member of the Council on Foreign Relations. In short, he is a very busy guy, but has found 30 minutes to join us today. Welcome to the show, Amir.

Amir Husain: Thank you very much for having me Byron. It’s my pleasure.

You and I had a cup of coffee a while ago and you gave me a copy of your book and I’ve read it and really enjoyed it. Why don’t we start with the book. Talk about that a little bit and then we’ll talk about SparkCognition Inc. Why did you write The Sentient Machine: The Coming Age of Artificial Intelligence?

Byron, I wrote this book because I thought that there was a lot of writing on artificial intelligence—what it could be. There’s a lot of sci fi that has visions of artificial intelligence and there’s a lot of very technical material around where artificial intelligence is as a science and as a practice today. So there’s a lot of that literature out there. But what I also saw was there was a lot of angst back in 2015, 2014. I actually had a personal experience in that realm where outside of my South by Southwest talks there was an anti-AI protest.

So just watching those protesters and seeing what their concerns were, I felt that a lot of the philosophical questions, the existential questions around the advent of AI, come down to this: if AI indeed ends up being like Commander Data, if it has sentience and becomes artificial general intelligence, then it will be able to do our jobs better than we can, and it will be more capable in, let’s say, the ‘art of war’ than we are. Does this mean that we will lose our jobs, that we will be meaningless, that our lives will be lacking in meaning, and that maybe the AI will kill us?

These are the kinds of concerns that people have had around AI, and I wanted to reflect on notions of man’s ability to create—the aspects of that which are embedded in our historical and religious traditions, and our conception of Man versus he who can create, our creator—and how that influences how we see this age of AI, where man might be empowered to create something which can in turn create, which can in turn think.

There are a lot of folks who also feel that this is far away, and I am an AI practitioner and I agree: I don’t think that artificial general intelligence is around the corner. It’s not going to happen next May, and even though I suppose some group could surprise us, the likely outcome is that we are going to wait a few decades. I think waiting a few decades isn’t a big deal, because in the grand scheme of things, in the history of the human race, what is a few decades? So ultimately the questions are still valid, and this book was written to address some of those existential questions, drawing on elements of philosophy, as well as science, as well as the reality of where AI stands at the moment.

So talk about those philosophical questions just broadly. What are those kinds of questions that will affect what happens with artificial intelligence?

Well, one question is a very simple one of self-worth. We tend to define ourselves by our capabilities and the jobs that we do. Many of our last names, in many cultures, are literally indicative of our profession: Goldsmith as an example, Farmer as an example. And this is not just a European thing; across the world you see this phenomenon of last names reflecting the profession of a woman or a man. And it is to this extent that we internalize the jobs we do as essentially being our identity, literally to the point where we take them on as a name.

So now when you de-link a man’s or a woman’s ability to produce, to engage in that particular labor that is a part of their identity, then what’s left? Are you still the human that you were with that skill? Are you less of a human being? Is humanity in any way linked to your ability to conduct this kind of economic labor? This is one question that I explored in the book, because I don’t know whether people really contemplate this issue so directly and think about it in philosophical terms, but I do know that, subjectively, people get depressed when they’re confronted with the idea that they might not be able to do the job they are comfortable doing, or have been comfortable doing for decades. So at some level it’s obviously having an impact.

And the question then is: is our ability to perform a certain class of economic labor in any way intrinsically connected to identity? Is it part of humanity? I explore this concept in the book, and I say, ‘OK, well, let’s take this away, let’s cut this away, let’s take away all the extra frills, all of what is not absolutely, fundamentally, uniquely human.’ And that was an interesting exercise for me. The conclusion that I came to—I don’t know whether I should spoil the book by sharing it here—but in a nutshell, and this is no surprise, is that our cognitive function, our higher-order thinking, our creativity: these are the things which make us absolutely unique amongst the known creation. And it is that which makes us unique and different. So this is one question, of self-worth in the age of AI, and another one is…

Just to put a pin in that for a moment: in the United States the workforce participation rate is only about 50% to begin with, so only about 50% of people work, because you’ve got adults who are retired, people who are unable to work, people who are independently wealthy… I mean, we already have something like half of adults not working. Does it really rise to the level of a philosophical question when it’s already something we have thousands of years of history with? Like, what are the really meaty things that AI gets at? For instance, do you think a machine can be creative?

Absolutely I think the machine can be creative.

You think people are machines?

I do think people are machines.

So then if that’s the case, how do you explain things like the mind? How do you think about consciousness? We don’t just measure temperature, we can feel warmth, we have a first person experience of the universe. How can a machine experience the world?

Well, you know, look, there’s this age-old discussion about qualia, and there’s this discussion about the subjective experience, and obviously that’s linked to consciousness, because that kind of subjective experience requires you to first know of your own existence and then apply the feeling of that experience to yourself in your mind. Essentially you are simulating not only the world, but you also have a model of yourself. And ultimately, in my view, consciousness is an emergent phenomenon.

You know the very famous Marvin Minsky hypothesis of The Society of Mind. I don’t know that I agree with every last bit of it, but the basic concept is that there are a large number of specialized processes running in the mind (the software being the mind, the hardware being the brain), and that the complex interactions of these processes result in something that looks very different from any one of them independently. This, in general, is a phenomenon called emergence. It exists in nature and it also exists in computers.

One of the first graphical programs I wrote as a child, in BASIC, drew straight lines, and yet on a CRT display what I actually saw were curves. I’d never drawn curves, but it turns out that when you light a large number of pixels with a certain gap in the middle, on a CRT display there are all sorts of effects and interactions, like the moiré effect and so on, where what you thought you were drawing was lines, and it shows up, if you look at it from an angle, as curves.

So the process of drawing a line is nothing like drawing a curve; there was no active intent or design to produce a curve, the curve just shows up. It’s a very simple example, one a kid writing a few lines of BASIC can do as an experiment, but there are obviously more complex examples of emergence as well. And so consciousness, to me, is an emergent property, an emergent phenomenon. It’s not about one thing.
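A minimal sketch of that experiment, translated from the spirit of his BASIC program into Python (our illustration, not his original code): every segment drawn below is a straight line, yet the family of lines reads as a smooth curve.

```python
# String-art emergence: draw only straight lines; a curve appears
# as their envelope. (Illustrative stand-in for the BASIC experiment
# described above.)
import matplotlib.pyplot as plt

n = 30  # number of straight lines
for i in range(n + 1):
    t = i / n
    # Connect a point sliding down the y-axis to a point sliding
    # along the x-axis; no curve is ever drawn explicitly.
    plt.plot([0, t], [1 - t, 0], color="steelblue", linewidth=0.8)

plt.gca().set_aspect("equal")
plt.title("Straight lines whose envelope appears as a curve")
plt.show()
```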

I don’t think there is a consciousness gland. I think that there are a large number of processes that interact to produce this consciousness. And what does that require? It requires, for example, a complex simulation capability, which the human brain has: the ability to think about time, to think about objects, to model them, and to apply your knowledge of physical forces and other phenomena within your brain to try and figure out where things are going.

So that simulation capability is very important, and then the other capability that’s important is the ability to model yourself. When you model yourself and you put yourself in a simulator and you see all these different things happening, there is not the real pain that you experience when you simulate, for example, being struck by an arrow, but there might be some fear. And why is that fear emanating? It’s because you watch your own model, in your imagination, in your simulation, suffer some sort of a problem. And that is very internal, right? None of this has happened in the external world, but you’re conscious of it happening. So to me, at the end of the day, it has some fundamental requirements: I believe simulation and self-modeling are two of those requirements, but ultimately it’s an emergent property.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.


In the era of data lakes and digital transformation, data catalogs are taking on new functionality and revitalized importance. You’d be hard-pressed to manage your data landscape responsibly without a data catalog, but just because catalogs are needed doesn’t mean they are merely a dry requirement. Done right, data catalogs can make self-service analytics across the enterprise far more productive, more widely adopted and — dare we say it — fun.

Data catalogs can facilitate frictionless discoverability, fostering the user enthusiasm needed for a robust data culture. They help users find the data they want and discover data they may not have known was available. Machine learning/AI-driven recommendations help users too, especially when they’re security-aware and based on what other users have found useful.

People are an essential part of successful data catalog implementations, and social collaboration features help spread analytics enthusiasm. Peer endorsement of data sets, user-contributed documentation and peer-to-peer communication are all important, and they motivate users toward successful adoption.

To learn more about the benefits of grassroots data catalog implementations and how to get teams excited to join in on all the fun, join us for this free 1-hour webinar from GigaOm Research. The webinar features GigaOm analyst Andrew Brust and special guest Aaron Kalb, Co-founder and VP of Design and Strategic Initiatives at Alation, a frontrunner in the data catalog arena.

In this 1-hour webinar, you will discover:

  • Why data catalogs have emerged so prominently in the data lake/BI/analytics world
  • How data catalogs can be user-positive, not just tools for IT
  • The role of machine learning and AI in data catalog success
  • Where governed personally identifiable information (PII) and business glossary connectivity tie in

Register now to join GigaOm and Alation for this free expert webinar.

Who Should Attend:

  • CTOs
  • Chief Data Officers
  • Business Analysts
  • Data Analysts
  • Data Engineers
  • Data Stewards
  • Database Administrators (DBAs)

About this Episode

Episode 84 of Voices in AI features host Byron Reese and David Cox discussing classifications of AI, and how the research has been evolving and growing.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm and I’m Byron Reese. I’m so excited about today’s show. Today we have David Cox. He is the Director of the MIT IBM Watson AI Lab, which is part of IBM Research. Before that he spent 11 years teaching at Harvard, interestingly in the Life Sciences. He holds an AB degree from Harvard in Biology and Psychology, and he holds a PhD in Neuroscience from MIT. Welcome to the show David!

David Cox: Thanks. It’s a great pleasure to be here.

I always like to start with my Rorschach question which is, “What is intelligence and why is Artificial Intelligence artificial?” And you’re a neuroscientist and a psychologist and a biologist, so how do you think of intelligence?

That’s a great question. I think we don’t necessarily need to have just one definition. I think people get hung up on the words, but at the end of the day, what makes us intelligent, what makes other organisms on this planet intelligent is the ability to absorb information about the environment, to build models of what’s going to happen next, to predict and then to make actions that help achieve whatever goal you’re trying to achieve. And when you look at it that way that’s a pretty broad definition.

Some people are purists and they want to say this is AI, but this other thing is just statistics or regression or if-then-else loops. At the end of the day, what we’re about is we’re trying to make machines that can make decisions the way we do and sometimes our decisions are very complicated. Sometimes our decisions are less complicated, but it really is about how do we model the world, how do we take actions that really drive us forward?

It’s funny, the AI word too. I’m a recovering academic, as you said. I was at Harvard for many years, and I think as a field we were really uncomfortable with the term ‘AI,’ so we desperately wanted to call it anything else. In 2017 and before, we wanted to call it ‘machine learning,’ or we wanted to call it ‘deep learning’ [to] be more specific. But in 2018, for whatever reason, we all just gave up and embraced this term ‘AI.’ In some ways I think it’s healthy. But when I joined IBM I was actually really pleasantly surprised by some framing that the company had done.

IBM does this thing called the Global Technology Outlook or GTO which happens every year and the company tries to collectively figure out—research plays a very big part of this—we try to figure out ‘What does the future look like?’ And they came up with this framing that I really like for AI. They did something extremely simple. They just put some adjectives in front of AI and I think it clarifies the debate a lot.

So basically, what we have today, like deep learning and machine learning: tremendously powerful technologies that are going to disrupt a lot of things. We call those Narrow AI, and I think that narrow framing really calls attention to the ways in which, even if it’s powerful, it’s fundamentally limited. And then on the other end of the spectrum we have General AI. This is a term that’s been around for a long time, this idea of systems that can decide what they want to do for themselves, that are broadly autonomous, and that’s fine. Those are really interesting discussions to have, but we’re not there as a field yet.

In the middle, and I think this is really where the interesting stroke is, there’s this notion of Broad AI, and I think that’s really where the stakes are today. How do we have systems that are able to go beyond what we have that’s narrow, without necessarily getting hung up on all these notions of what ‘General Intelligence’ might be? So things like having systems that are interpretable, systems that can work with different kinds of data and can integrate knowledge from other sources: that’s the domain of Broad AI. Broad Intelligence is really what the lab I lead is all about.

There’s a lot in there and I agree with you. I’m not really that interested in that low end and what’s the lowest bar in AI. What makes the question interesting to me is really the mechanism by which we are intelligent, whatever that is, and does that intelligence require a mechanistic reductionist view of the world? In other words, is that something that you believe we’re going to be able to duplicate either… in terms of its function, or are we going to be able to build machines that are as versatile as a human in intelligence, and creative and would have emotions and all of the rest, or is that an open question?

I have no doubt that we’re eventually, as a human race, going to be able to figure out how to build intelligent systems that are just as intelligent as we are. In some of these things, we tend to think about how we’re different from other kinds of intelligences on Earth. We do things like… there was a period of time where we wanted to distinguish ourselves from the animals, and we thought reason, the ability to reason and do things like mathematics and abstract logic, was what was uniquely human about us.

And then computers came along, and all of a sudden computers can actually do some of those things better than we can, even in arithmetic and solving complex logic problems or math problems. Then we moved toward thinking that maybe it’s emotion; maybe emotion is what makes us uniquely human and rational. It was a kind of narcissism, I think, in our own view, which is understandable and justifiable: how are we special in this world?

But I think in many ways we’re going to end up having systems that do have something like emotion. Even if you look at reinforcement learning: those systems have a notion of reward. I don’t think it’s such a far reach to think maybe we’ll even, in a sci-fi world, have machines that have senses of pleasure and hopes and ambitions and things like that.

At the end of the day, our brains are computers. I think that’s sometimes a controversial statement, but it’s one that I think is well-grounded. It’s a very sophisticated computer. It happens to be made out of biological materials. But at the end of the day it’s a tremendously efficient, tremendously powerful, tremendously parallel nanoscale biological computer; it’s like biological nanotechnology. And to the extent that it is a computer, to the extent that we can agree on that, computer science gives us equivalencies. We can build a computer with different hardware. We don’t have to emulate the hardware, we don’t have to slavishly copy the brain, but it is sort of a given that we will eventually be able to do everything the brain does in a computer. Now of course that’s all farther off, I think. Those are not the stakes—those aren’t the battlefronts that we’re working on today. But I think the sky’s the limit in terms of where AI can go.

You mentioned Narrow and General AI, and this classification you’re putting in between them is Broad, and I have an opinion and I’m curious what you think. At least with regard to Narrow and General, they are not on a continuum; they’re actually unrelated technologies. Would you agree with that or not?

Would you say that a narrow AI gets a little better, then a little better, then a little better, and then, ta-da! One day it can compose a Hamilton? Or do you think that they may be completely unrelated? That this model of ‘Hey, let’s take a lot of data about the past and study it very carefully to learn to do one thing’ is very different from whatever General Intelligence is going to be?

There’s this idea that if you want to go to the moon, one way to go to the moon—to get closer to the moon—is to climb the mountain.

Right. Exactly.

And you’ll get closer, but you’re not on the right path. And maybe you’d be better off with a little rocket that, at first, doesn’t go as high as the tree or as high as the mountain, but it’ll get you where you need to go. I do think there is a strong flavor of that with today’s AI.

And today’s AI, if we’re plain about things, is deep learning. What’s really been successful in deep learning is supervised learning. We train a model to do every part of seeing based on classifying objects: you classify many images, you have lots of training data, and you build a statistical model. And that’s everything the model has ever seen; it has to learn from those images and from that task.

And we’re starting to see that actually the solutions you get—again, they are tremendously useful—do have a little bit of that quality of climbing a tree or climbing a mountain. There’s a bunch of recent work suggesting that these models are basically looking at texture; a lot of what these supervised vision systems learn comes down to rough texture.

There are also some wonderful examples where you take a captioning system—a system that can take an image and produce a caption. It can produce wonderful captions in cases where the images look like the ones it was trained on, but show it anything just a little bit weird, like an airplane that’s about to crash or a family fleeing their home on a flooding beach, and it’ll produce things like ‘an airplane is on the tarmac at an airport’ or ‘a family is standing on a beach.’ It’s like it kind of missed the point: it was able to do something because it learned correlations between the inputs it was given and the outputs we asked for, but it didn’t have a deep understanding. And I think that’s the crux of what you’re getting at, and I agree, at least in part.

So with Broad, the way you’re thinking of it, it sounds to me just from the few words you said, it’s an incremental improvement over Narrow. It’s not a junior version of General AI. Would you agree with that? You’re basically taking techniques we have and just doing them bigger and more expansively and smarter and better, or is that not the case?

No. When we think about Broad AI, we really are thinking a little bit of ‘press the reset button, but don’t throw away things that work.’ Deep learning is a set of tools which is tremendously powerful, and we’d be kind of foolish to throw them away. But when we think about Broad AI, what we’re really getting at is: how do we start to make contact with that deep structure in the world… like common sense.

We have all kinds of common sense. When I look at the desk in front of me, I didn’t learn to do tasks that have to do with that desk through lots and lots of labeled examples, or even many, many trials in a reinforcement learning kind of setup. I know things about the world, simple things, things we take for granted: I know that my desk is probably made of wood, and I know that wood is a solid, and solids can’t pass through other solids. And I know that it’s probably flat, and that if I put my hand out I would be able to orient it in a position appropriate to hover above it…

There are all these affordances and all this super simple common-sense stuff that you don’t get when you just do brute-force statistical learning. When we think about Broad AI, what we’re really thinking about is: ‘How do we infuse that knowledge, that understanding, that common sense?’ And one area that we’re excited about and that we’re working on here at the MIT-IBM Lab is this idea of neuro-symbolic hybrids.

So again, this is in the spirit of ‘don’t throw away neural networks.’ They’re wonderful at extracting certain kinds of statistical structure from the world: a convolutional neural network does a wonderful job of extracting information from an image, and LSTMs and recurrent neural networks do a wonderful job of extracting structure from natural language. But we’re building in symbolic systems as first-class citizens in a hybrid system that combines them all together.

Some of the work we’re doing now is building systems where we use neural networks to extract structure from the noisy, messy inputs of vision and other modalities, but then actually having symbolic AI systems on top. Symbolic AI systems have been around basically contemporaneously with neural networks; they’ve been ‘in the wings’ all this time. Deep learning, as everyone knows, is anyway a rebrand of the neural networks from the 1980s that are suddenly powerful again. They’re powerful for the first time because we have enough data and we have enough compute.

I think a lot of the symbolic ideas, sort of logical operations, planning, things like that, are also very powerful techniques, but they haven’t really been able to shine yet, partly because they’ve been waiting for something—just the way that neural networks were waiting for compute and data to come along. I think in many ways some of these symbolic techniques have been waiting for neural networks to come along, because neural networks can bridge that [gap] from the messiness of the signals coming in to the symbolic regime where we can start to actually work. One of the things we’re really excited about is building these systems that can bridge across that gap.
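As a deliberately tiny sketch of that division of labor (our illustration; the actual MIT-IBM Lab systems are far richer), imagine a perception stage that emits symbolic facts about a scene and a symbolic stage that answers queries over them. Here the neural network is faked with a lookup table; in a real hybrid it would be a trained model such as a CNN.

```python
# Toy neuro-symbolic hybrid: "neural" perception produces symbols,
# then a symbolic reasoner works over them. Everything here is
# invented for illustration.

def neural_perception(image_id: str) -> set[str]:
    # Stand-in for a neural network emitting symbolic facts about a scene.
    fake_detections = {
        "scene1": {"cube(red)", "sphere(blue)", "left_of(cube, sphere)"},
    }
    return fake_detections.get(image_id, set())

def symbolic_reasoner(facts: set[str], query: str) -> bool:
    # Stand-in for a symbolic engine: set membership plus one
    # hand-written inference rule (left_of implies the reverse right_of).
    if query in facts:
        return True
    if query == "right_of(sphere, cube)" and "left_of(cube, sphere)" in facts:
        return True
    return False

facts = neural_perception("scene1")
print(symbolic_reasoner(facts, "right_of(sphere, cube)"))  # True
```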

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Gigaom » Big Data, by JP Morgenthal

Adoption and use of an API are heavily dependent upon its consistency, reliability, and intuitiveness. It is a delicate balance. If the API is too difficult to use, or its results are inconsistent across uses, its value will be limited and, hence, adoption will be low. Our research has found that many enterprises have focused on the technical requirements to deliver and scale an API, but have not focused on the corresponding non-functional requirements of a comprehensive API strategy: support, service-level management, ecosystem development, and usability in the face of legacy integration or of streaming large amounts of data in response to a request.

Key Findings:

  • Enterprises are creating APIs at an incredibly fast pace often without paying attention to the implications for operational overhead or a comprehensive strategy that goes beyond the technology for building and deployment.
  • A successful API strategy must include provisions for developer onboarding and support, clear policies related to service levels and use, the ability to operate in a hybrid cloud model, and a proactive monetization scheme, the latter focused primarily on support for various sizing approaches and scalability.
  • There is still a strong requirement to “roll your own” API infrastructure to support a comprehensive API strategy. While many of today’s API gateway and management products address the basics, businesses will still need to invest in building a developer portal, providing support, growing a community, and addressing many of the operational issues. Microservices architecture is helping to alleviate some of the overhead associated with deployment and scalability by packaging APIs into manageable entities.

About this Episode

Episode 83 of Voices in AI features host Byron Reese and Margaret Mitchell discussing the nature of language and its impact on machine learning and intelligence.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today my guest is Margaret Mitchell. She is a senior research scientist at Google doing amazing work. And she studied linguistics at Reed College and Computational Linguistics at the University of Washington. Welcome to the show!

Margaret Mitchell: Thank you. Thank you for having me.

I’m always intrigued by how people make their way to the AI world, because a lot of times what they study in University [is so varied]. I’ve seen neuroscientists, I see physicists, I see all kinds of backgrounds. [It’s] like all roads lead to Rome. What was the path that got you from linguistics to computational linguistics and to artificial intelligence?

So I followed a path similar to, I think, some other people who’ve had linguistics training and then gone into natural language processing, which is sort of [the] applied field of AI, focusing specifically on processing and understanding text, as well as generating it. And so I had been kind of fascinated by noun phrases when I was an undergrad. Those are things that refer to people, places, objects in the world, and things like that.

I wanted to figure out: is there a way that I could, like, analyze things in the world and then generate a noun phrase? So I was kind of playing around with just this idea of ‘How could I generate noun phrases that are humanlike?’ And that was before I knew about natural language processing, before this new wave of AI interest. I was just playing around with trying to do something humanlike, from my understanding of how language worked. Then I found myself having to code to get that to work, mocking up some basic examples of how that could work if you had different knowledge about the kinds of things you’re trying to talk about.

And once I started doing that, I realized that I was doing essentially what’s called natural language generation. So generating phrases and things like that based on some input data or input knowledge base, something like that. And so once I started getting into the natural language generation world, it was a slippery slope to get into machine learning and then what we’re now calling artificial intelligence because those kinds of things end up being the methods that you use in order to process language.
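As a toy illustration of the kind of template-based noun phrase generation described here (our invented example, not Mitchell’s actual system; the attribute names and ordering rule are assumptions), a few lines of code can map structured attributes of an object to an English-like phrase:

```python
# Toy template-based noun phrase generation (illustrative only).
def generate_noun_phrase(attrs: dict) -> str:
    """Build a phrase like 'the small red mug on the desk'."""
    # Order adjectives the way English speakers tend to: size before color.
    adjectives = [attrs[k] for k in ("size", "color") if k in attrs]
    phrase = " ".join(["the", *adjectives, attrs["noun"]])
    if "location" in attrs:
        phrase += " on the " + attrs["location"]
    return phrase

print(generate_noun_phrase(
    {"size": "small", "color": "red", "noun": "mug", "location": "desk"}))
# -> the small red mug on the desk
```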

So my question is: I always hear these things that say ‘computers have ninety-x-point-whatever percent accuracy in transcription,’ and I fly a lot. My frequent flyer number of choice has an A, an H and an 8 in it.

Oh no.

And I would say it never gets it right.

Right.

And it’s only got 36 choices.

Right.

Why is it so awful?

Right. So that’s speech processing. And that has to do with a bunch of different things, including just how well the speech stream is being analyzed. The frequencies that are picked up are going to be different depending on what kind of device you’re using, and a lot of times the higher frequencies are cut off. So sounds that we hear really easily face to face get muddled when we’re using different kinds of devices, especially on things like telephones, which cut off a lot of those higher frequencies that really help with those distinctions. And then there are just general training issues: depending on who you’ve trained on and what the data represents, you’re going to have different kinds of strengths and weaknesses.

Well, I also find that, in a way, our ability to process language is ahead of our ability, in many cases, to do something with it. I can’t say the names out loud, because I have two of these popular devices on my desk and they’ll answer me if I mention them, but they always understand what I’m saying. But the degree to which they get it right… like, if I say ‘what’s bigger: a nickel or the sun?’ they never get it. And yet they usually understand the sentence.

So I don’t really know where I’m going with that, other than: do you feel like you could say your area of practice is one of the more mature ones? Like, hey, we’re doing our bit; the rest of you common-sense people over there, and you models-of-the-world people over there, and you transfer learning people, y’all are falling behind, but the computational linguistics people—we have it all together?

I don’t think that’s true. And the things you’re mentioning aren’t actually mutually exclusive either, so in natural language processing you often use common sense databases or you’re actually helping to do information extraction in order to fill out those databases. And you can also use transfer learning as a general technique that is pretty powerful in deep learning models right now.

Deep learning models are used in natural language processing as well as image processing as well as a ton of other stuff.

So… everything you’re mentioning is relevant to this task of saying something and having your device on your desktop understand what you’re talking about. That whole process isn’t simply recognizing the words; it’s taking those words, mapping them to some sort of user intent, and then being able to act on that intent. That whole pipeline involves a ton of different models, and it requires being able to make queries about the world and extract information based on, usually, the content words of the phrase: nouns, verbs, things that convey the main ideas in your utterance, using those to find information relevant to them.
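A deliberately tiny sketch of that pipeline (our illustration; the stopword list and intent keyword sets are invented): reduce an utterance to its content words, then match them against per-intent keyword sets.

```python
# Toy words -> content words -> intent pipeline (illustrative only).
INTENTS = {
    "weather": {"weather", "rain", "temperature", "forecast"},
    "comparison": {"bigger", "smaller", "larger", "heavier"},
}
STOPWORDS = {"the", "a", "an", "is", "what", "or", "of"}

def classify_intent(utterance: str) -> str:
    # Strip punctuation and keep the content words: a crude stand-in
    # for real tokenization and parsing.
    words = {w.strip("?.,!'").lower() for w in utterance.split()}
    content = words - STOPWORDS
    # Score each intent by keyword overlap with the content words.
    scores = {name: len(content & kw) for name, kw in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify_intent("What is bigger, a nickel or the sun?"))  # comparison
```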

So the Turing Test… if I can’t tell whether I’m talking to a person or a machine, you’ve got to say the machine is doing a pretty good job; it’s thinking, according to Turing. Do you think passing the Turing Test would actually be a watershed event? Or do you think that’s more like marketing and hype, and it’s not the kind of thing you even care about one way or the other?

Right. So the Turing Test as it was originally construed has this basic notion that the person who is judging can’t tell whether it’s human-generated or machine-generated. And there are lots of ways to do that. That’s not exactly what we mean by human-level performance. So, for example, you could trivially pass the Turing Test if you were pretending to be a machine that doesn’t understand English well, right? So you could say, ‘Oh, there’s a person behind this, they’re just learning English for the first time—they might get some things mixed up.’

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.

Gigaom » Big Data, by Andrew J. Brust

Self-Service Business Intelligence (SSBI), while somewhat nebulous in its definition, is a clear product category that over the past few years has gained significant market share and mindshare. This is especially true with organizations looking to liberate their data from IT-managed silos, leverage their power users to do more with the data already in their systems, and allow for additional data sources to be brought in and blended in for analysis. For many organizations, the result has been to give frontline analysts, as well as executives, beautiful, modern ways of interacting with and visualizing data. SSBI also facilitates deriving insights from that data and, finally, making these capabilities accessible to all levels of the organization, technical and non-technical, both in-cloud and on-premises.

In this report, we explore SSBI products and technologies, outline the key differentiating factors between the major offerings, review major incumbent vendors, and identify the most promising contenders. We also describe how organizations can take advantage of the SSBI paradigm to satisfy their users and the challenges they face.

Key findings:

  • Self-service BI is a rapidly evolving sector with new capabilities added at a quick pace. Numerous BI vendors have made significant innovation investment around data access, blending and mashup, visualizations, data “storytelling,” AI-generated insights, and embedded analytics.
  • Most vendors offer a variety of capabilities with various levels of success at integration. We found that the major areas on offer include (1) modern data visualizations; (2) a variety of connectors to disparate on-premises and cloud data sources; (3) data blending, mashup and visual extract, transform and load (ETL) aimed at non-technical users; (4) embedding, programmability, and data science platform integration aimed at developers; (5) natural language/conversational search of the data; (6) AI-generated insights and visualizations; (7) mobile capabilities for iOS and Android; and (8) on premises, public, private, and hybrid cloud deployments.
  • There is no canonical, universally agreed-upon definition of self-service BI, although to most people it is the combination of ease-of-use, prolific data connectivity and blending, user-friendly data visualizations, and reduced need for IT involvement. Vendors attempt to define SSBI according to their own strengths, which can lead to confusion in the market as organizations grapple with both the technology and the cultural/process changes required to implement it successfully.
  • Self-service BI puts the business user at its core and tailors its products and marketing pitch to reducing, or even eliminating, traditional IT from the equation.
  • There is a clear set of incumbents in the marketplace, with Tableau historically having the most mindshare; and other BI players, such as Microsoft, Qlik, and TIBCO having modernized their offerings in order to compete.
  • The formidable new set of contenders includes ThoughtSpot, Zoomdata, Looker, and SiSense. They differentiate themselves in the key areas of UX, conversational search, and AI-generated insights, which both poses a threat to the established vendors and may influence their future offerings.
  • Vendors are evolving their products at a fast pace, to close any functionality gaps and position themselves as holistic platforms with as many capabilities as possible. Vendors can no longer compete as simply data visualization or ETL platforms. As such, they are all slowly converging to a similar slate of functionality over the long term.

The world of Artificial Intelligence (AI) is at a crossroads between potential and accessibility. AI is powerful, and far more actionable than it has been in the multiple decades since it emerged as a discipline, and yet, the amount of trial and error, hunch, and bespoke effort involved in doing rigorous AI work is still significant. Of course, this is the case with many new technologies as they cross the chasm between what they can do theoretically and what they are able to do practically and efficiently.

The game changer in this apparent quagmire is automated machine learning (increasingly known simply as AutoML). AutoML is busting the monopoly that highly-trained data scientists have over profitable and advantageous use of AI, because it enables non-specialists to work through the bits of AI that were previously off-limits. It’s a win-win though, as AutoML also helps those data scientists work at a higher level and get more of their specialized work done.

Labor of Love

After selection of a data set and its target/label column, manual AI (if we may use that as shorthand for conventional work with machine learning and deep learning, without the use of AutoML) involves several steps:

  1. Complex data preprocessing, including steps called feature extraction and feature engineering (features are the columns in a data set whose input values are germane to the model’s predictions).
  2. Selection of one or more algorithms with which to build a model or set of models.
  3. Setting values for the algorithms’ so-called hyperparameters.
  4. Training the model.
  5. Testing the model.

There is more, too, including model selection, deployment of the model for production use, hosting the model (complete with generation of a Web services REST API interface), and post-production management of the model, including retraining it if/as patterns in the data change and the model’s accuracy degrades. This ML workflow is illustrated in Figure 1, and sketched in code after the figure.

Figure 1: The manual ML workflow, various phases of which are automated by AutoML
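To make the workflow concrete, here is a minimal sketch of steps 1 through 5 using scikit-learn. The library, dataset, and hyperparameter values are our own illustrative choices; the report itself is not tied to any particular toolkit.

```python
# A minimal sketch of the manual ML workflow described above.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Step 1: data preprocessing (here, simple feature scaling).
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Steps 2 and 3: choose an algorithm and set its hyperparameters by hand.
model = RandomForestClassifier(n_estimators=200, max_depth=8)

# Step 4: train the model.
model.fit(X_train, y_train)

# Step 5: test the model on held-out data.
print("holdout accuracy:", model.score(X_test, y_test))
```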

Counter Service

AutoML platforms automate various subsets of the above steps, and the technology is already making AI accessible to non-specialists. That is as it should be for any technology that seeks to become mainstream. Lots of people drive without being mechanics. Likewise, information workers who are well-acquainted with their data, and have strong motivations to build predictive models around it, should be able to do so without the assistance of a specialist. However, it is still early days for the technology, and looking at each AutoML solution creates the impression that vendors are still getting their bearings in this market, as are users. Things are still gelling and in their formative stages. Coming up to speed on AutoML in the pioneering days has its advantages though; it lets us get a leg up before the technology is ubiquitous.
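As one concrete slice of that automation, the sketch below replaces the hand-set hyperparameters of the previous example with an automated search, using scikit-learn’s GridSearchCV as a simple stand-in for the far richer search (covering feature engineering and algorithm selection as well) that full AutoML platforms perform.

```python
# Automated hyperparameter search: one piece of what AutoML automates.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Instead of hand-picking hyperparameters (step 3 above), declare a
# search space and let the machine evaluate candidates by cross-validation.
search = GridSearchCV(
    RandomForestClassifier(),
    param_grid={"n_estimators": [50, 200], "max_depth": [4, 8, None]},
    cv=5,
)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)
print("best CV accuracy:", round(search.best_score_, 3))
```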

Full Range

This report provides an overview of AutoML solutions spanning the open source and commercial worlds, the major public cloud providers and vendors of software that can run either on-premises or in the cloud, application programming interface (API)- and command line interface (CLI)-level solutions, as well as products with full-on user interfaces (UIs); and solutions aimed at business users, data scientists, or both.
