



Edge computing shipping solutions by Pratexo, delivering intelligent automation and communication systems to the largest shipping fleets in the world.
CHALLENGES
- Ships becoming sophisticated sensor hubs and data generators, producing and transmitting massive amounts of information
- Ship systems require overall coordination/control with minimal on-ship human oversight
- Challenging environments require high resiliency
SOLUTIONS
- Secure micro cloud running on each ship
- Resilient systems able to run offline, distributed locally with other ships, or connected centrally
BENEFITS
- Ship systems reliable, coordinated and optimized
- Foundation for fully autonomous ships
- 10 to 20% fuel savings = massive CO2 reductions
- Massive savings on satellite link costs
Watch our latest webinar:
For more information, contact Pratexo
Webinar Transcript
Blaine: Thank you for joining us for Shipping on the Edge – Transforming Shipping with Next Generation Technologies brought to you by Pratexo and Telenor Maritime.
Let me introduce the speakers, and I’ll start with myself. My name is Blaine Mathieu. I’m the CEO of Pratexo and am happy to be here with you today. I’ve spent the last 30 years in enterprise software and the last six or so years directly in the IoT, AI, machine learning, and edge computing space, and I’m very excited to be talking with you. Mats, would you like to introduce yourself?
Mats: Sure, my name is Mats Olsson; I’m the EMEA GM for Pratexo. I have a long international career behind me having worked for companies like Dresser, Halliburton, and General Electric developing technologies for the energy sector.
Blaine: Thank you, Mats. And Knut?
Knut: Thanks, Blaine. My name is Knut Fjellheim. I’m heading the IT department at Telenor Maritime. I have more than 25 years of experience with connectivity and mobile technology. I founded the company back in 2002. I am focused on adapting and standardizing land-based technology into the maritime segment. That’s me.
Blaine: Thank you. Before we dive into the content, let’s do a brief intro to Pratexo and Telenor Maritime. Pratexo is the intelligent edge computing and distributed cloud platform. As we’ll talk about a little later in the presentation, it’s fundamentally about reducing the time and cost to design, test, and provision the kind of architectures required to take Shipping into the next generation. The company was founded in 2019 with locations in Norway, Sweden, and Austin, Texas. Knut, how about introducing Telenor Maritime?
Knut: Yes, Telenor Maritime delivers services into six different segments today. We have the cruise and ferry segment on the left, where we deliver and manage connectivity, mobile, and Wi-Fi services. In the middle, we see the oil and gas segment, where we deliver public and private LTE network services. And on the right, we have the fishery and merchant segments, where we deliver our hybrid connectivity services. The key takeaway here is that we are adapting standardized land-based technology into the maritime segment. That’s us.
Blaine: Perfect, thank you. All right, let’s dive in. Maybe I’ll set the groundwork by talking about some of the challenges facing the shipping industry today.
I am sure many of you on the call are aware of these, but I don’t want to focus only on challenges; I want to also focus on the opportunities, so let’s dive in. I’m sure many of you now have technology meetings, board meetings, and strategy sessions about these kinds of next-generation shipping applications: connecting the supply chain from ship to shore and back again; standing up integrated ship operating systems; smart performance management; predictive maintenance, which has been very hot for a few years now and continues to be critical to ship operations; intelligent voyage optimization; and especially safety and security of people and systems in, on, and around the ship.
So, pretty well any kind of application that needs to sense flowing data in real time, analyze it, and then take action in real time is part of a next-generation shipping application. As my former colleagues at Gartner have been predicting for some time now, most of the data currently generated by IoT devices and other systems is still being pushed up into centralized clouds. We know that’s going to change dramatically even over the next three or four years. Of course, that’s even more the case in the shipping industry, where it can become very expensive to push all that data up to a centralized cloud via satellite links.
So definitely some meta trends going on there. Fundamentally, as we’ll discuss in more detail throughout this presentation, to provision and run these next-generation ship applications, you require what we’re going to be calling a ship-based micro cloud or a micro cloud at sea – fundamentally bringing the right capabilities of a central cloud down literally onto the ship.
Of course, to serve ship-related use cases, these clouds have to be very resilient, right? You can’t allow your ship systems to go down, and we’ll talk about how we do that in a few minutes. They also have to be scalable, secure, and fundamentally open – you don’t want a black box running on your ship; you want to know exactly how the system operates and functions.
Now the challenge, though, is that this can be quite hard to do. Most ship-related next-generation application POCs and pilots coming into production right now are being done for the first time. The team that’s standing them up hasn’t done many of these things before, and so most projects are custom one-offs. These POCs can be slow to stand up. In fact, the data is well publicized by now: 70 to 80 percent of IoT-related POCs (proofs of concept) ultimately never get into production.
It’s one thing to stand up a system at the POC level, but it’s very different to stand it up at scale, ingesting large amounts of data, processing it, and then running these applications in the real world. So, another way to think about this is this circle of pain. You’ve got all of this complexity designing, building, deploying, and managing these applications that we’re going to talk more about. They have to be resilient, scalable, and secure. And then, of course, it’s not just about implementing them once – it’s about supporting them through the life cycle of those applications as they and the ships they run on continue to evolve and get new sensors, new systems, etc.
And then we’re caught in the speed and risk continuum: the faster you move in implementing these systems, the higher the risk you’re taking. Of course, we need these systems to run perfectly every time. Data and system fragmentation is another thing we see in many applications in this space – and certainly in the shipping industry. These systems, machines, and sensors are increasingly coming online and putting off increasing amounts of data, but they are very siloed. They’re disconnected from each other, so you end up with a dozen or more separate control systems, each tied only to the particular piece of data flowing into it. You don’t have a generalized data layer – what we call the democratization of data across the ship – that would allow you to create the much more powerful applications we’ll talk about again in a couple of minutes.
So imagine if you were able to do that. Think of what could be done with a ship-based micro cloud approach running near the edge on the ship. Many different use cases relate to safety: ensuring that the maximum number of personnel in an area is not exceeded; using object recognition to detect if a person is in an area; controlling fire suppression systems so the right kind of extinguisher goes into an area depending on whether or not people are present.
Of course, there is also using object recognition to detect and classify objects to see if they actually pose a danger to the ship, and pose-detection technologies running on the edge to alert someone if a crew member falls down or even falls overboard. These are obvious use cases for running these kinds of systems on the ship itself that could, frankly, never be done if you had to push all of this data into a central cloud. Security applications, again, make sure not just that the right number of people but the right people are in a particular area of the ship at particular times. Of course, all the IoT-related sensors and systems related to supply chain and cargo control – sensing temperature, location, vibration, you name it – are coming on very strong these days. And then the MRO side (maintenance, repair, and operations): first of all, sensing the status of mechanical systems, but also using that data to predict future status and potential failures to prevent them from happening in the first place – an interesting use case.
I was recently working with a customer on taking dumb gauges that weren’t IoT-enabled yet – that weren’t actually producing digital data streams – and putting a camera in front of them; you can easily convert those readings into digital data streams and avoid having a person continually monitoring that set of gauges. That is absolutely possible today with these technologies running on or near the edge. If you bring it all together into overall ship systems, it gets back to this notion of the ship-based micro cloud allowing the democratization of data across multiple applications.
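To make that gauge-to-camera idea concrete, here is a loose Python sketch using OpenCV. Everything in it – the gauge centre, the needle sweep angles, the scale range – is an illustrative assumption for one hypothetical pressure gauge, not Pratexo or Telenor Maritime code; a real deployment would calibrate these per gauge and run continuously on an edge node.

```python
# Loose sketch: estimating a reading from a "dumb" analog gauge with a camera.
# All calibration values are illustrative assumptions for one hypothetical gauge.
import math
import cv2  # pip install opencv-python

CENTER = (320, 240)                  # gauge centre in pixels for this camera mount
ANGLE_MIN, ANGLE_MAX = 45.0, 315.0   # needle sweep, degrees clockwise from "6 o'clock"
VALUE_MIN, VALUE_MAX = 0.0, 10.0     # gauge scale, e.g. 0-10 bar

def needle_angle(frame):
    """Find the needle as the longest straight line in a tightly cropped gauge image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, math.pi / 180, 60,
                            minLineLength=60, maxLineGap=5)
    if lines is None:
        return None
    x1, y1, x2, y2 = max(
        lines, key=lambda l: math.hypot(l[0][2] - l[0][0], l[0][3] - l[0][1]))[0]
    # The endpoint farther from the centre is treated as the needle tip.
    tip = max([(x1, y1), (x2, y2)],
              key=lambda p: math.hypot(p[0] - CENTER[0], p[1] - CENTER[1]))
    # Needle angle measured clockwise from straight down, in 0..360 degrees.
    return math.degrees(math.atan2(-(tip[0] - CENTER[0]),
                                   tip[1] - CENTER[1])) % 360.0

def gauge_value(frame):
    """Map the needle angle linearly onto the gauge scale."""
    angle = needle_angle(frame)
    if angle is None:
        return None
    fraction = (angle - ANGLE_MIN) / (ANGLE_MAX - ANGLE_MIN)
    return VALUE_MIN + fraction * (VALUE_MAX - VALUE_MIN)

cap = cv2.VideoCapture(0)            # the camera pointed at the gauge
ok, frame = cap.read()
if ok:
    print("gauge reading:", gauge_value(frame))
cap.release()
```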
Instead of having all of these siloed data systems running, you can have one core data backbone and have multiple applications running on top. This is a theme we’re going to repeat throughout the rest of this presentation. Fundamentally a lot of these use cases are continuing to get us closer to this future of increasingly autonomous Shipping.
Now, I started with the word “imagine” here, but the reality is this isn’t imagination. Every one of these use cases is being built and supported right now. The only questions are how quickly you are able to get it up and running, and how quickly you are able to scale it beyond the POC stage into true production. Of course, if you can do that, the value is extremely high to different stakeholders.
So, imagine again: you’ve got this platform for innovation and digitization for ship owners and operators. It’s an innovation platform that democratizes the data in, around, and across the ship, leading to amazing cost savings. Breaking down these technology silos allows you to rapidly stand up more applications to improve ship operations – and not just for one ship: these ship-based micro clouds can communicate with each other across the fleet and with your central systems connected into the head office.
Insurance agencies, financial institutions, and classification agencies are all very focused on inspections, improving safety, and lowering operating costs, especially around environmental, social, and governance goals. These can be achieved by having ship systems work together more effectively instead of in isolated silos.
The shipping solution providers – the companies that are actually building the tech, the software, and other applications that would sit on this micro cloud at sea – have a huge stake in this as well. They don’t want to be building their own infrastructure stack from scratch every single time if they could take their applications (some of which were originally designed to run in a central cloud) and instead move them onto the ship rapidly. That’s very much in their interest.
Finally, the hardware OEMs – the companies that are producing the machines, the equipment, the sensors – are increasingly trying to find ways to drive more value out of their equipment and to enable hardware-as-a-service instead of a one-off purchase. By connecting their hardware into this ecosystem – this data and compute backbone we’ve been talking about – there are a lot of opportunities for them to show that their hardware is much more powerful than it might initially seem.
Let me wrap up the intro session by talking about the maritime solutions deck, and both Knut and Mats are going to go back to this repeatedly.
So here’s the core framework we’re going to start with. Think of all the hardware devices, sensors, and PLCs that are running across the ship, increasingly putting off higher and higher volumes of data. Then, at the other end, you’ve got all the potential applications that could be run on a ship today. Knut is going to detail some of these a little bit more, and Mats as well.
Fundamentally, what Telenor Maritime and Pratexo provide is that middle layer – data collection, the unified hosting service, and, around it all, a ship-based micro cloud – that lets those applications run on one integrated, connected system on the ship more powerfully and more quickly than they ever could if you tried to implement each of them individually. That’s what we’re going to be talking about for the rest of the presentation.
Another way to think about this is as an app store at sea. Just as an iPhone or an Android device is capable of running multiple applications that share a common data stream and the common capabilities of the device, we can now do exactly the same on a ship. You don’t want to have a separate iPhone for every application you run; you want one iPhone that’s able to run all of your applications, and that’s exactly what is now possible in the maritime environment.
With that said, I am now going to turn it over to Knut. Please take it away!
Knut: Thanks. I will start by giving you a recent real-life story. By getting access to critical onboard data, we created an onboard safety risk-assessment application. It started with a challenge: we had been contracted by an insurance company that was having an issue with a vessel for which it needed to pay out insurance. Far too often, the vessel had repeat incidents caused by different failures on board the ship, especially engine failures that made the vessel hard to maneuver.
Together with a software company, we created a digital safety risk-assessment application. The safety risk application is a digital checklist where the crew reports operational status in real time. To run the digital checklist, we needed input such as fuel and engine performance data in real time. At that time, Telenor Maritime had already started developing the unified hosting service and had the technology to collect and share the data in real time.
The results of introducing this safety risk application: safe operation of the vessel, insurance payouts reduced to a minimum, and a vessel owner saving money. Everyone is happy!
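As a rough illustration of how such a digital checklist can be driven by live engine and fuel data, here is a minimal Python sketch. The item names, thresholds, and telemetry fields are hypothetical; the real application and its rules are not described in this webinar.

```python
# Hypothetical sketch of a digital checklist backed by live telemetry.
# Item names, thresholds, and telemetry fields are illustrative, not the real app.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ChecklistItem:
    name: str
    check: Callable[[Dict[str, float]], bool]   # returns True when the item passes

# Illustrative pre-departure items driven by real-time engine and fuel data.
ITEMS = [
    ChecklistItem("Fuel pressure in range",
                  lambda t: 3.0 <= t.get("fuel_pressure_bar", 0.0) <= 8.0),
    ChecklistItem("Main engine coolant below limit",
                  lambda t: t.get("coolant_temp_c", 999.0) < 95.0),
    ChecklistItem("Both steering pumps running",
                  lambda t: t.get("steering_pumps_running", 0) >= 2),
]

def assess(telemetry):
    """Evaluate every checklist item against the latest telemetry snapshot."""
    return {item.name: item.check(telemetry) for item in ITEMS}

# Example: a snapshot as it might arrive from the onboard data layer.
snapshot = {"fuel_pressure_bar": 4.2, "coolant_temp_c": 82.0,
            "steering_pumps_running": 2}
for name, passed in assess(snapshot).items():
    print("PASS" if passed else "FAIL", name)
```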
This is one of many examples where we see the benefits of digitalization on board. Now, look at a typical vessel architecture: as you know, a ship is stacked with a lot of different apps and application vendors. We call this a non-integrated vessel architecture. On the land side, we have already digitized the ecosystem – built on open standards – and the different application vendors share data through open platforms.
We need to enable the same ecosystem on the ship – but can you guess how many suppliers are coming into the different market segments these days? Looking at the start-up world, there are nearly 300 application vendors. It will be complete chaos if the fuel-optimization provider, the digital navigation provider, and the digital checklist provider each have to install their own separate hardware and software for their applications on board.
It becomes a challenge when these application providers do not share data: the data is stored in silos, and access to it is restricted. You will also not be able to handle the huge number of business agreements, support agreements, and so on. So, how do we solve this complexity? We created something quite unique. As I said, the unified hosting service opens up the business boundaries and breaks up the existing data silos.
I will start explaining Telenor Maritime’s service from the bottom. To collect the data, we have the vessel data collector, which integrates according to classification requirements. We integrate the data collector with the bridge, engine control systems, or other systems on board. It collects the data and standardizes it into an ISO format. The data is sent to the onboard unified hosting service, into a middleware software layer. As you can see in the stack, the unified hosting service hosts different microservice applications in the same way as an iPhone.
We can install different VMs on the same platform. It integrates data from different shipboard systems into common data storage. It then adds meta tags to the data according to a maritime context, so the data can be shared in the same way as in a land-based ecosystem. By standardizing the data, all applications can understand it – it’s like a common language between all applications.
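A minimal sketch of what that standardization and tagging step could look like is shown below. The field names and envelope format are assumptions for illustration only – the webinar says data is converted into an ISO format and tagged with maritime context, without naming the exact schema.

```python
# Illustrative sketch: wrapping a raw shipboard reading in a standardized,
# tagged envelope. Field names are assumptions; the actual schema is not
# specified in the webinar beyond "an ISO format" with maritime meta tags.
import json
from datetime import datetime, timezone

def standardize(raw_value, raw_unit, source_system, channel_id, vessel_imo):
    """Return a common JSON envelope that every onboard application can parse."""
    record = {
        "vessel": {"imo": vessel_imo},            # which ship produced the value
        "channel": channel_id,                    # e.g. "ME1/FuelOilInletPressure"
        "source": source_system,                  # e.g. "engine_control"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "value": raw_value,
        "unit": raw_unit,
    }
    return json.dumps(record)

# Example: a pressure reading collected from the engine control system.
print(standardize(4.2, "bar", "engine_control",
                  "ME1/FuelOilInletPressure", "9876543"))
```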
To succeed with the digital transformation, connectivity is also an important part of the value chain. Telenor Maritime’s hybrid connectivity solutions meet the increased demand for bandwidth. Within the circle, we have a high-frequency radio solution that provides global low-band connectivity. This solution is mainly used to send IoT data – small data volumes – to the shore side. Mobile broadband solutions provide low-cost connectivity.
We also deliver traditional satellite solutions and, these days, we are integrating LEO (low Earth orbit) satellites, which will provide fiber-like quality to the vessel. It’s the next generation of satellite systems.
So the key takeaway here: the hybrid connectivity solution provides you with a seamless connectivity platform, including the required cybersecurity. When we move to the land side, it’s also important that all the data coming from the vessels is sent through our cyber-secure infrastructure to the different cloud solutions. In addition, we can provide remote connectivity, which will become more and more important for the different service providers, as we saw with the safety risk assessment application.
And now I will let Mats explain in more detail the ship-based micro cloud setup.
Mats: Thank you, Knut, for that insightful presentation. Now let me dive into my component. Here, let’s look at what’s inside the green circle. Knut mentioned earlier how you have a micro cloud running at sea, and he showed an example of a small server running on the bridge.
Now let me explain how Pratexo powers and enables this micro cloud. What we’re going to talk about is the middle layer – the infrastructure hosting different applications. That is the architecture component run by Pratexo, and that is what you see in the green circle here. It is like an architect building a city with roads and streets, ensuring that the parts fit together, buildings are accessible, and traffic flow is optimized. That is what we do.
In this type of world, as you may know, a cloud is actually made up of many different components, standards, and tools. You may have heard about technologies like Kafka, Gluster, MQTT, Kubernetes, Docker, and so on. Fundamentally, what Pratexo does is take all those elements, all that complexity, and simplify it for you. It all connects together and works for you in an ultra-stable manner. Data flows are standardized, and data is delivered where it should be, in the right place at the right time.
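As a hedged illustration of how two of those building blocks might be wired together on an onboard node, here is a small Python sketch that forwards MQTT sensor traffic into a Kafka topic. The broker addresses, topic names, and payload shape are assumptions, and this is not the Pratexo implementation – the platform assembles and manages such components for you.

```python
# Hedged sketch: forwarding shipboard MQTT sensor traffic into Kafka on an
# onboard node. Broker addresses, topics, and payload shape are assumptions.
# Requires paho-mqtt >= 2.0 and kafka-python.
import json
import paho.mqtt.client as mqtt
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                  # onboard Kafka broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def on_message(client, userdata, msg):
    """Republish each MQTT sensor reading onto a Kafka topic."""
    reading = json.loads(msg.payload)
    reading["mqtt_topic"] = msg.topic     # keep the source topic for consumers
    producer.send("vessel.sensor.readings", value=reading)

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("localhost", 1883)         # onboard MQTT broker
client.subscribe("sensors/#")             # all shipboard sensor topics
client.loop_forever()                     # block and keep forwarding
```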
Since the system is based on open technology, you as a customer always have full access to the entire technology. In fact, the way it forms that cloud is by taking those components and tools and spreading them across multiple nodes running on the ship.
Let’s look here for a moment at the picture. You see the ship there with some small blue boxes that are Pratexo nodes spread around the ship. There are several advantages to having an architecture like that.
First of all, these Pratexo nodes are located closer to where the data is generated, so you can ingest data and process that data in real-time, which is, of course, a big part of the value proposition of edge computing. Because you are distributing the computing load among multiple nodes, you also create a very resilient system. Now resilient means, in this context, robust – avoiding a single point of failure and being fault-tolerant. So if one of the Pratexo compute nodes or edge nodes goes down, the others can take on the load and continue the processing of data around and across the ship.
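One common way to get that kind of fault tolerance with the tools mentioned earlier is a Kafka consumer group: the same worker runs on every node, and if one node fails, the broker reassigns its share of the work to the survivors. The sketch below illustrates only that idea, with assumed topic, group, and broker names; it is not Pratexo’s actual mechanism.

```python
# Minimal sketch of node-level fault tolerance with a Kafka consumer group:
# the same script runs on every onboard node; if one node goes down, the
# broker rebalances its partitions to the surviving nodes. Names are assumed.
import json
from kafka import KafkaConsumer

def process(reading):
    """Placeholder for whatever local analytics this node is responsible for."""
    print("handled:", reading.get("channel"), reading.get("value"))

consumer = KafkaConsumer(
    "vessel.sensor.readings",
    bootstrap_servers="localhost:9092",
    group_id="onboard-analytics",          # every node joins the same group
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# Each node processes only the partitions assigned to it; on node failure,
# the remaining group members automatically take over those partitions.
for record in consumer:
    process(record.value)
```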
By joining these nodes together, they fundamentally form the ship-based micro cloud at sea that you heard Knut and Blaine mention in the earlier parts of the presentation. These clouds can not only unify the data flows and communication across the ship, but also share data between ships and with a centralized cloud. Security – ensuring that all the standards the shipping company operates under are met and risks are minimized – is also an important part of the solution.
Now let’s look a bit deeper into the platform capabilities. You’ve already heard that this is an open and secure platform, highly scalable and highly resilient. The platform not only handles the creation of the architecture; it handles all the elements throughout their lifetime to minimize TCO (total cost of ownership). The platform therefore helps you with configuration, and once you have configured a system, you can simulate it up in the cloud to ensure that it meets your requirements, or redo the configuration.
In any case, you do that with the push of a button. You can do provisioning very fast. The system supports lifecycle management, including software updates, security patches, and so on. Ongoing performance is monitored, and optimization takes place to balance the load between different compute nodes and get maximum value out of the hardware you have.
We have been talking about the Pratexo node and the micro cloud architecture, and we have also talked about the different Telenor Maritime layers being part of this infrastructure. With this, we are creating the maritime digital ecosystem enabling the use of different applications. Software containers can be added to the platform, just like you add an application to your smartphone. You, as a shipowner, should be able to select the best-in-class applications for your business needs.
Let me finish by talking about one example of an application that will run on top of this: the Navidium application. Navidium enables you to improve business performance in multiple areas. Those areas include predicting ETA – estimated time of arrival – fuel consumption prediction, and route optimization, where weather data is considered in the calculations; also condition-based monitoring of the equipment, as well as real-time vessel performance monitoring and several other features.
Now data can be presented on ship or onshore on dashboards for quick decision making to improve the performance of your ships. Navidium is a close partner, by the way, to Pratexo and Telenor Maritime.
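For intuition only, here is a back-of-the-envelope Python sketch of the simplest possible ETA estimate: remaining great-circle distance divided by current speed over ground. A product like Navidium folds weather, currents, and route legs into its calculations; none of that is reflected here, and the positions and speed are made-up example values.

```python
# Back-of-the-envelope ETA sketch: remaining great-circle distance divided by
# current speed over ground. Positions and speed are made-up example values.
import math
from datetime import datetime, timedelta, timezone

EARTH_RADIUS_NM = 3440.065   # mean Earth radius in nautical miles

def great_circle_nm(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two positions, in nautical miles."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_NM * math.asin(math.sqrt(a))

def naive_eta(lat, lon, dest_lat, dest_lon, speed_knots):
    """Projected arrival assuming a straight great-circle track at constant speed."""
    hours = great_circle_nm(lat, lon, dest_lat, dest_lon) / speed_knots
    return datetime.now(timezone.utc) + timedelta(hours=hours)

# Example: a mid-North-Sea position heading for Rotterdam at 14 knots.
print("naive ETA:", naive_eta(56.0, 3.0, 51.95, 4.05, 14.0))
```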
So now let’s bring it all together with an actual real-world example showing how Pratexo, Telenor Maritime, and Navidium are working together to solve some critical challenges for the shipping industry. These challenges include fragmented, isolated solutions with low utilization; siloed data and duplicated, dark data; the complexity of designing, building, deploying, and managing applications; as well as high risk to project success and low innovation speed.
We believe we can add value by implementing a secure, open, highly scalable, and flexible micro cloud running on each ship, powered by Pratexo and Telenor Maritime. This creates a complete maritime ecosystem, enabling containerized applications to run on top of that infrastructure. Keep in mind there could be numerous applications running on top of that micro cloud, sharing a common data stream and a common compute backbone.
With this holistic solution, the benefits include breaking down data silos, democratizing data, and letting ship owners own their own data; improved maintenance planning; and fuel and CO2 reductions. There are examples of 10 to 20% reductions in fuel consumption, as well as reduced time and risk for the implementation of new features and functions. Finally, this micro cloud creates a foundation for innovation and growth.
Thank you, and over to Blaine.
Blaine: Thank you, Mats, and thank you, Knut as well. So a little bit about key takeaways.
What we’ve seen and heard is how this notion of a ship-based micro cloud or a micro cloud at sea can really help accelerate the digital transformation of Shipping and much more rapidly enable these next-generation shipping applications. Now you know, historically, you can imagine it has been challenging to implement something like a micro cloud at sea. But now, solutions like Telenor Maritime and Pratexo working together are coming to the rescue.
They enable you to bring these applications to your fleet very rapidly, with scalability, security, and very high resiliency, and – as both Mats and Knut talked about – they fundamentally break down those operational and data silos, leading to reduced operational costs, improved efficiency, CO2 reductions, and many other benefits. These have a huge impact on the organization.
Now let us turn to the Q&A. Why don’t we start with Mats:
“Who would actually implement the Pratexo and Telenor Maritime infrastructure on a ship? Would it be the ship company employees themselves? Do they have an IT department doing this, or who would actually do the work?”
Mats: No, that would typically not be the case. In this case, it would be partners like Telenor Maritime, Pratexo, etc., creating the platform and then handling it as a service, so that internal IT departments could focus fully on delivering applications. It makes no sense for any organization to get involved in creating such a complex platform in a one-off type of scenario. It’s far more efficient to let other people do that, and as the shipping company, you focus on value-added services where you have your domain expertise.
Blaine: Yes, it sounds like you are saying it’s theoretically possible that the end-user shipping company could implement the solution, but more likely Telenor Maritime and Pratexo would do it themselves, or maybe a systems integrator would be involved that could help stand up the architecture very rapidly.
Mats: Yeah, let me add that there is no policy stopping that, but from a practical perspective, it is just too cumbersome, I think the shipping companies will find.
Blaine: Yeah, it makes perfect sense to focus on their area of expertise. Why don’t I give this one to you first, Knut: talk a little bit more about the security of the system and of this ecosystem in general, and how we can ensure security.
Knut: When we defined the unified hosting service, we worked in close cooperation with the classification societies, and those guys set the rules for how the system should comply within the maritime segment. I can say that it’s not so simple – it actually brings a lot of security issues into the arena. You can’t plug into a control system without knowing what you’re doing. So we take this from actually collecting the sensor data to delivering the data in a cyber-secure way through the platform; that’s important.
Blaine: Makes sense. Mats, do you have anything to add to the security question?
Mats: Yeah, it is a very good question, by the way. This is where the IT and OT (operational technology) worlds meet. Traditionally, the OT world used to be disconnected from the IT world, but now, with the IoT trend, everything is being tied together, and that opens up a lot of new challenges. The way we are addressing that is that we have some of the world-leading security specialists working for us, applying the latest research results from the academic world as well as from industry, and implementing the best-known security practices in multiple areas.
Blaine: Right on. All right, another question. This is an interesting one:
“Is this just simply taking a cloud and moving it to the ship?”
Maybe I’ll take this one because I’m the one who started by introducing the concept of the micro cloud at sea.
So, to some degree, you could think of it as taking the capabilities and major components of a centralized cloud and moving them onto a ship. But it’s also different. The kinds of applications that tend to run on centralized clouds – ERP systems, CRM systems, HR management systems – are not the kind that require you to run real-time machine learning algorithms ingesting massive amounts of data, processing those events, and distributing the computing across multiple compute nodes to ensure resiliency. In other words, the applications we’ve been talking about are not the kind of applications you tend to run in central clouds.
These kinds of applications are specific to running closer to the so-called edge, where the data is generated on the ship. So while there are definitely some conceptual similarities between a central cloud and a micro cloud at sea, there are also some important differences and critical optimizations. In fact, this is again why, historically, the failure rate of POCs has been quite high in the IoT space. It’s fairly easy to take a small amount of data, not really running in real time, and push it up to a standard cloud infrastructure. But once you go to scale with massive amounts of data, event streaming, machine learning, and actions taken in real time, it’s a different thing. It’s still a cloud – it’s a micro cloud at sea – but it’s a different kind of cloud supporting a different kind of use case.
Mats: Let me just add that another complicating factor is that the edge cloud (the micro cloud at sea) is not always online to the main cloud, so the logic has to work in a very robust way even while it is offline, which is another thing that is different from the main cloud.
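A common pattern for that “occasionally offline” requirement is local store-and-forward: shore-bound messages are buffered on the ship and flushed whenever the link returns, while local processing keeps running regardless. The Python sketch below shows the idea only; the connectivity probe, message shape, and uplink call are stand-ins, not part of the actual platform.

```python
# Sketch of local store-and-forward for an occasionally offline ship: queue
# shore-bound messages and flush them when the link is up. The connectivity
# probe and the uplink call are stand-ins, not part of the real platform.
import collections
import json
import socket

buffer = collections.deque(maxlen=100_000)   # bounded local queue

def link_is_up(host="8.8.8.8", port=53, timeout=3):
    """Crude connectivity probe; a real system would check the actual uplink."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

def send_to_shore(message):
    """Stand-in for the real uplink, e.g. publishing to a shore-side broker."""
    print("uplinked:", json.dumps(message))

def enqueue(message):
    buffer.append(message)          # never block local processing on the uplink

def flush_when_online():
    while buffer and link_is_up():
        send_to_shore(buffer.popleft())

# Local processing keeps running regardless of connectivity.
enqueue({"channel": "ME1/FuelOilInletPressure", "value": 4.2})
flush_when_online()
```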
Blaine: Thank you. Yeah, that is a very good point. Okay, let us do one more here and maybe “Talk a little bit more about the business model, so how does a company go about getting this thing running. Is it a service?”
So the notion of a managed service is that, fundamentally, you choose the applications you need to be able to run on your ship, and then Telenor Maritime and Pratexo make this architecture and infrastructure available as an ongoing managed service for the ship operator. Is that right, Knut?
Knut: Yeah, and in the future, I see a portal system coming that can actually support the huge number of applications that we have shown here today.
Blaine: Yeah, a literal app store. I think there’s no doubt that is coming, obviously, because we’re putting in place the infrastructure in the maritime industry to enable it.
Again, thank you so much for your attention today. Mats and Knut: thank you for the great presentations. Of course, anyone can find out more information about Pratexo or Telenor Maritime at pratexo.com and telenormaritime.com.