The Convergence of Cloud, Edge and AI
Session Abstract:
Hundreds of thousands of gigabytes of data are generated every day in AI applications ranging from healthcare and logistics to smart manufacturing. With this much data generated, the key consideration is where that data should be processed. An understanding of critical at-the-edge intelligence versus non-real-time-critical intelligence is necessary, as some applications require cloud computing and others the edge. Or is there a hybrid approach that will be more prevalent? This session will set the stage for how edge and cloud play critical roles for AI-driven technologies. Joining this session are Shravan Vallala, VP of Technology at Robin.io, Mostafa Essa, AI and Data Analytics Senior Director at Vodafone, and Wei Yeang (Max) Toh, GM of Market Development at Intel.
Executive Speakers:
Shravan Vallala - Vice President of Technology, Robin.io
Mostafa Essa - AI and Data Analytics DE/ADS Senior Director/VP, Vodafone
Wei Yeang (Max) Toh - GM, Market Development Organization, Intel
Transcription
Abe Nejad: Hundreds of thousands of gigabytes of data are generated every day in AI applications ranging from healthcare and logistics to smart manufacturing. With this much data generated, the key consideration is where the data should be processed. Understanding which intelligence is critical at the edge versus non-real-time critical is necessary, as some applications require cloud computing and others the edge. Or is there a hybrid approach that will be more prevalent?
This session will set the stage for how edge and cloud play critical roles for AI-driven technologies. Joining this session are Mostafa Essa, AI and Data Analytics Distinguished Engineer at Vodafone. Next to him is Shravan Vallala, Vice President of Technology at Robin.io. And on the end is Wei Yeang Toh, General Manager of Market Development at Intel. Gentlemen, welcome.
Everyone: Thank you.
Abe Nejad: Thanks for being here. I hope I've got your names, pronunciations and titles correct. Wei, if you don't mind, I'm going to start with you on the end there. What is AI's real value in enabling a sustainable, secure, and data-rich future for 5G and the edge?
Wei: Thanks, that's a very good question. So if we look, 5G is here today, right? And the edge itself. You're going to drive a lot of data into the network pipe, and being able to turn the data that's coming in into an asset is very important. That's why AI comes into the mix; AI won't do its magic without a data set. So now, having a data set, the question is: how do I implement AI and use it in an ethical way as well? I'll just give two examples, one for sustainable and one for secure.
Sustainable means green; it means power management. What we're showcasing this year, one classic example, is how we use AI to manage power at the system level so that it scales according to the workload. It's workload-driven AI power management: if you hit the maximum workload, you take the power hit; if you don't need the maximum workload running, you can turn it down automatically with closed-loop automation. That saves a lot of energy at the whole system level. This is one example on the sustainability side.
If you look at security, it's the same thing. If I just take SASE, secure access service edge, as an example: we insert AI capability with our partners, whereby the security itself does early prediction of potential intrusions coming in. That helps the data become more secure, and it helps protect the user and the whole data set as well. So AI is critical; again, the data set is an asset, and using AI to unlock its capability is super critical.
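The workload-driven power management Wei describes can be sketched as a simple closed loop: read utilization, map it to a power cap, apply the cap. This is only an illustrative sketch; the function names, the default thresholds, and the linear scaling policy are assumptions, not Intel's actual implementation.

```python
def scale_power(utilization, p_min=0.3, p_max=1.0):
    """Map observed workload utilization (0..1) to a power-cap fraction.

    At maximum workload the system takes the full power hit (cap = p_max);
    as load drops, the cap is lowered automatically, saving energy at the
    system level. p_min keeps enough headroom for the idle baseline.
    """
    utilization = max(0.0, min(1.0, utilization))  # clamp noisy readings
    return p_min + (p_max - p_min) * utilization

def control_tick(read_utilization, apply_power_cap):
    """One iteration of the closed loop: sense, decide, actuate."""
    cap = scale_power(read_utilization())
    apply_power_cap(cap)
    return cap
```

In a real deployment, `read_utilization` would sample telemetry and `apply_power_cap` would drive a platform power-capping interface; here they are placeholders so the control loop itself is visible.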
Abe Nejad: Shravan, automation and orchestration are sort of your wheelhouse. How do they play a critical role in AI-driven applications?
Shravan: Yeah, so as Wei mentioned, there is a lot of data because of the number of devices connecting to the 5G network today, and that data now has to be processed. With the advent of edge computing, it is getting processed at the edge. That means there will be a lot more edge nodes, and there is a need to have automation in place to bootstrap them. That's where Robin's [03:49 inaudible] provides an ability to automate the installation of the OS, firmware upgrades, FPGA flashing, programming, configuration, et cetera, which allows a smooth transition for these edge nodes: once they're racked and stacked, they can be ready to deploy applications.
Once the applications are ready to deploy: most of the time in the AI world today, the applications are microservice-based, and the de facto orchestrator is Kubernetes, which enables these microservices to be hosted seamlessly. So now the AI application is ready to be deployed. These AI applications are usually very compute-intensive, so you have requirements like NUMA awareness and CPU pinning, and you need an orchestrator smart enough to place these workloads at the right edge nodes. Once the application is deployed, now comes the next phase, where you have LCM (lifecycle management) operations.
These are the day-two operations you have to perform, where you have to manage the node and the application. You have to make sure the nodes are upgraded periodically, and that the applications are upgraded periodically. So you need a workflow to automate this across the large number of edge nodes that are now present.
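The placement decision Shravan describes, matching compute-intensive AI workloads to edge nodes with the right CPU and NUMA characteristics, can be illustrated with a toy scheduler. A real orchestrator such as Kubernetes expresses this through node labels, affinity rules and resource requests; the dictionary field names here are assumptions for illustration only.

```python
def pick_node(nodes, required):
    """Return the name of the first edge node satisfying a workload's needs.

    `nodes` is a list of dicts like
        {"name": "edge-1", "free_cpus": 8, "numa_aware": True}
    and `required` holds the workload's needs, e.g. how many CPUs it wants
    pinned and whether it needs a NUMA-aware host.
    """
    for node in nodes:
        has_cpus = node["free_cpus"] >= required["cpus"]
        numa_ok = (not required.get("numa_aware")) or node.get("numa_aware")
        if has_cpus and numa_ok:
            return node["name"]
    return None  # no suitable edge node: the workload must wait or go elsewhere
```

Usage sketch: given two nodes, one small and one NUMA-aware with spare CPUs, a workload asking for four pinned CPUs on NUMA-aware hardware lands on the larger node, while an oversized request returns `None`.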
Abe Nejad: Yeah. Mostafa, let's get to some use cases here. Can you give us a use case or an example of how AI at the edge really enables these AI applications?
Mostafa: Sure. As you know, we can talk about different use cases related to latency, saving transmission spectrum, and so on. Moving the processing part of AI from the cloud to the edge allows us to do much more and gain many more benefits from this transformation of the data. In the old days, for example, we were doing both the processing and the inference in the cloud. Now we can train the model in the cloud and then push it directly to the edge, and the edge does the inference there. So the latency, and the whole action time for the machine, for example for an autonomous car to take an action according to the machine learning, will be much, much shorter than the old [06:18 inaudible] thing.
For example, you can talk about autonomous vehicles, but I will come back to our expertise in the mobile network: beamforming, for example. This is one of the cases where you can rely on a machine learning algorithm at the edge, because you need to manage the movement of the antenna beams from one place to another. You can concentrate the beams in an area if you have something like a traffic jam, and when the traffic jam has fully cleared, you can redistribute the beamforming again, and you can predict the next beamforming activity or action you want to take. If you do it in the cloud, you lose a lot of time; if you do it at the edge, it's much better.
So this is one use case in the telco space, as well as the autonomous vehicles I mentioned, which is very famous. The car can take the action automatically at the edge without the long round trip of the data going from the sensors to the cloud, being processed there, and coming back again to the steering, for example, to move right or left and avoid something. So those are two examples.
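The train-in-the-cloud, infer-at-the-edge pattern Mostafa describes can be sketched minimally: the heavy fitting happens centrally, and only a small model artifact is pushed to the edge, where inference is a cheap local computation with no cloud round trip. The function names and the toy least-squares linear model are illustrative assumptions, not any particular vendor's pipeline.

```python
def train_in_cloud(xs, ys):
    """'Cloud' side: fit y ~ w*x + b by ordinary least squares.

    Training can afford large data and compute; only the tiny resulting
    model (two numbers here) is shipped to the edge.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - w * mx
    return {"w": w, "b": b}  # the artifact pushed to the edge node

def infer_at_edge(model, x):
    """'Edge' side: inference uses only the pushed model, so the
    action latency is purely local."""
    return model["w"] * x + model["b"]
```

In practice the artifact would be a serialized neural network rather than two coefficients, but the division of labor, expensive training centrally and fast inference locally, is the same.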
Abe Nejad: Shravan, can you give us another use case, maybe a new one we haven't heard yet, where you would feel more comfortable doing the compute at the edge rather than in the cloud, as Mostafa mentioned?
Shravan: At the edge or at the cloud?
Abe Nejad: Well, yeah, at the edge or in the cloud, correct.
Shravan: So yes, when you have a requirement to do real-time or near-real-time decision making, the edge is where your AI application should run, but there are various scenarios where this is not feasible. For example, if the decision you have to make depends not only on the current data set, but on, let's say, a large historical data set, that can't be done at the edge; you don't have the storage capacity there. Another use case I can think of is if your decision depends not only on your local data set, but on a group of geo-distributed locations where you have sensors or something sending this data. At that point, your edge will not be able to make the decision.
So then you have to move the decision to the cloud. For example, there are a lot of applications which can't be realized today at the edge, such as genome sequencing, weather prediction, or even natural language processing. Those things cannot be done at the edge because of the compute requirements and data requirements these applications have.
Abe Nejad: Any scenarios or new applications that maybe haven't been talked about as much as, say, autonomous vehicles?
Wei: Well, I'll point back to what Mostafa said as an example; funnily enough, we're showcasing the same example in our booth: beamforming. It is a true foundation for a lot of use cases. We don't want to create a technology that's applicable to just one use case, and beamforming is a perfect example, applicable across different use cases. What we're showcasing this time around is AI-driven beamforming, exactly the same thing; it's funny that we have exactly the same demo there. We use the real-time RIC and non-real-time RIC as the foundation to host the beamforming, so that you have third parties coming in to drive the innovation. Different vendors have different beamforming implementations, so you need that common architecture to host the capability, and that's where AI comes in as well, to run at the edge. So I thought it's a very good example; I'll second that.
Abe Nejad: Does the beamforming use case apply to the hybrid approach spanning both edge and cloud AI technologies? Can you maybe walk us through that?
Wei: Because you have real time and non-real time. For real time, you definitely have to run at the edge, but you don't force everything to run at the edge. The edge is pretty expensive; it's not cheap, [10:33 inaudible] price. So you're going to have some non-real-time parts as well, where you utilize an assisting engine running in the cloud. Use whatever is already there, don't reinvent it, right? Whatever you need running in real time, run at the edge itself. That could be one example, and it depends on the vendor and what the latency requirement is. You don't need to force-feed everything to run simply at the edge. So yes, as Shravan says, it applies to the use cases as well.
Abe Nejad: Shravan, we're talking about cloud, edge and AI technologies and applications. What do you think this conversation would sound like this time next year?
Shravan: Yeah. Hybrid AI is how things are running today. If you take a look at any smart device, there is a minimum of intelligence at the edge device to filter out what data needs to be sent to the cloud, and the sophisticated decisions happen in the cloud. For example, if you take a smart camera, it is continuously recording video, but you don't stream all of it. The end device has enough intelligence to detect an event and then send only a part of the video segment, which can be further analyzed in the cloud. So a lot of the applications, smart devices and home appliances we have today have a hybrid AI implementation.
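The smart-camera behavior Shravan describes can be sketched as an on-device event filter: a lightweight detector scores each frame, and only segments above a threshold are forwarded to the cloud for the heavier analysis. The names and the scoring scheme below are hypothetical, purely to illustrate the hybrid split.

```python
def frames_to_stream(frames, threshold=0.5):
    """Select which frames a smart camera should send to the cloud.

    `frames` is a list of (frame_id, event_score) pairs, where the score
    comes from a cheap on-device detector (e.g. motion estimation). Frames
    scoring above the threshold are deemed interesting and streamed for
    deeper cloud-side analysis; everything else stays local and is dropped.
    """
    return [frame_id for frame_id, score in frames if score > threshold]
```

The economics of the hybrid approach show up directly: in the example below, only two of four frames cross the threshold, so only half the video leaves the device, saving bandwidth while the cloud still sees every detected event.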
Abe Nejad: Mostafa, the future of AI, when we're talking about hybrid edge and cloud?
Mostafa: Yeah, I second Shravan in this thought. I believe hybrid is the most optimal approach. I'll go back first to the cost: when you put a lot of processing units at the edge, you pay a lot of money, but you save on the other side of the spectrum, which is one of the precious aspects of the equation. So it's a balance: what is the benefit you get by adding this processing unit at the edge? You have to do this kind of calculation at the beginning. So I think hybrid is the most optimal thing, because you can do the balancing and continuously optimize the system. For example, another use case I'll take is power and energy saving; we have a Go Green initiative at Vodafone, for example.
You can do the edge part for shutting down spectrum or antennas when you have low traffic at midnight, for example; this belongs at the edge because you need a very fast action. But in the cloud, you can do a lot of analysis to save battery life, for example, and find charging patterns, because you don't want to charge all the time. Sometimes, to preserve battery life, you have to charge even when commercial power is on and there is no blackout. This kind of analysis needs a lot of training and a huge historical data set, for which you need the cloud part. So you have to go hybrid, and you can do this kind of study all the time to reduce cost and enhance the performance and efficiency for our customers.
Abe Nejad: Did you want to respond? Because I want to touch on the spectral efficiency savings. If there is any reticence to move towards AI technologies or applications, and I want to go down the row here, in your mind what would be the number one challenge people are concerned about before they adopt hybrid or edge? Is it purely monetary, or are there other challenges?
Wei: Not only that. I touched on a couple of things. It's about efficiency, and it's about driving innovation as well, because when you do hybrid, you open up the possibility of innovation, for different players to come in and participate. And then it's about using it in an ethical way as well; I can't stress that enough. So the challenge is for the industry itself today to come together. It would be easier to provide standard tools, but then per use case you have to optimize the capability so that it fits better into each vertical. Then it helps the ecosystem come in and participate more.
You want to encourage more people to come in and embrace the technology. So it's challenging, but it's heading in that direction right now. You have common tools now, and the industry players optimize based on the common [15:36 inaudible]; that is a very encouraging situation. We see the ecosystem coming together.
Abe Nejad: Yeah. Shravan, from your perspective, what is the number one challenge as people move towards AI? What would be the reticence there, the reservation to move slower rather than quicker?
Shravan: I just want to add one thing: based on the use case, some use cases might mandate that you process certain information at either the edge or the cloud. For example, if you have secure data, say for GDPR reasons, you probably have to process some of that data at the edge. What was your question?
Abe Nejad: Oh, no. Well, if entities have a reticence, or they're not moving towards AI technology as quickly as they should, what would be the reason for that? And of course, Wei mentioned a few factors.
Shravan: I would say cost and also the value that you get out of it.
Abe Nejad: Right, he mentioned that; that you have to sort of balance it.
Shravan: Yeah. It's not that every application has to have AI just because AI exists, but it's getting there. There are also enhancements in hardware to realize some of this AI computation. So it's eventually going to be there; it'll be everywhere.
Abe Nejad: From an operator's perspective, is there any reticence on your part to move towards AI? Not in your department, that's your core competency, but what have you heard about AI and people's challenges in moving towards it?
Mostafa: I don't think I can add much more than what they said. It's related to privacy and data residency, because this is personal data; you don't want to expose it to anyone. You may want to do more innovation, but you have to do that innovation within the law. That's why you have to maneuver between these and be more cautious when you are dealing with the data of our customers [17:38 inaudible], which is something very, very critical, as well as the cost; the infrastructure isn't always available in all countries, nor is connectivity for the edge or for terminals, for example. The technology [17:53 inaudible] will improve this kind of connectivity, because the machine itself already has the tools to be connected to the network.
You don't have to buy a SIM card and [18:05 inaudible]. So most of the technology ecosystem right now, as you see, is trying to accelerate the connectivity, the data collection, and so on. But the challenges are still the same: the cost, the infrastructure, the ethical part, and the law enforcement aspect.
Shravan: Some data patterns might not actually generate anything useful. In the end, your data has to produce a model that can be trained and reused to provide some inference. I don't know if all data lends itself to providing AI constructs.
Abe Nejad: Good point. Well, I think you've all brought up use cases and interesting factors in the evolution towards AI technology and applications. Wei, thanks so much for your time. I think beamforming, which you brought up and actually have in your booth here at Mobile Conference, is something that should be talked about a lot more; there are a lot of tangential things we can discuss around it. So thanks for bringing that up.
Shravan, we haven't done this before; this is our first time, and you did a fantastic job. In fact, you're on another session immediately after this, so we have you for another half hour or so. Great information; automation and orchestration, of course, are your core competency, and we're glad you brought that to the table. And Mostafa, it's always good to have an operator's perspective. I know you're all super busy, so we're glad you made it in, and hopefully we can do it again, not virtually next time, but in person.
Mostafa: Sure, sure. My pleasure.
Abe Nejad: Thanks again for everyone's time.
Thank you.
For any inquiries, please email anejad@thenetworkmediagroup.com