Unattended services managed remotely

Nowadays, devices that are operated remotely (i.e., without local staff to verify their working conditions or act in the event of an incident) offer many services to the public. Some examples are remote ATMs, billboards, and ticket vending machines.

Generally speaking, companies that offer this kind of service have centralized monitoring mechanisms that ensure their smooth operation and allow operators to react whenever an issue arises (e.g., network connectivity problems, lack of paper when printing tickets, frozen screens, etc.).


For the most part, incidents can be quickly solved remotely. However, some cases require manual on-site intervention and the physical displacement of security or technical staff.


This is particularly true when dealing with critical services. Here, a potential software problem can force a highly specialized technician to travel on-site (with all the costs this entails) when all that was needed to solve the issue was to turn the device off and on.

Incidents related to the blocking of devices, for whatever reason, in places with no access to remote management are usually solved by installing relay systems that can be remotely managed via IP and which activate/deactivate the on-off switches on the devices.

For them to work properly, in addition to being powered and equipped with LAN connectivity ports, these relays require a specific router/modem configuration that allows operators to reach them remotely over IP, as well as management software to enable/disable them.
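The power-cycle sequence an operator triggers through such a relay can be sketched as follows. This is a minimal, simulated example: the `SimulatedRelay` class and `power_cycle` helper are hypothetical stand-ins, since every real relay board exposes its own vendor-specific protocol.

```python
import time

class SimulatedRelay:
    """Stands in for a real IP relay board so the sketch runs offline."""
    def __init__(self):
        self.channels = {1: True}  # channel 1 powers the device, initially on

    def set(self, channel, on):
        self.channels[channel] = on

def power_cycle(relay, channel, off_seconds=5):
    """Switch the device off, wait, then switch it back on."""
    relay.set(channel, False)   # cut power to the blocked device
    time.sleep(off_seconds)     # leave it off long enough to fully reset
    relay.set(channel, True)    # restore power
    return relay.channels[channel]

relay = SimulatedRelay()
print(power_cycle(relay, channel=1, off_seconds=0.1))  # True
```

In a real deployment the `set` calls would be replaced by the relay vendor's IP commands, which is precisely why such a relay on the local network needs the isolation discussed below.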

Although there are plenty of cases where this solution fits perfectly, having a relay connected to the local IP network poses an inherent security problem and is not recommended in scenarios where service is critical. These situations call for full control and a total isolation of the local network.

But how? You can achieve this by selecting an advanced router, since it can be securely managed from a central communications control center. However, this raises the issue of sending the necessary commands to the customer's critical devices (e.g., remote ATMs in the case of banks). To do this, you need a machine that can act as an interface between the advanced router and the end device, guaranteeing the necessary isolation and control.

Teldat developed the MTC+, a device that can offer an overall solution to the aforementioned problems. Thanks to the MTC+, Teldat helps its customers save costs and maintenance time while offering critical services.


The network as the basis for Digital Transformation

Digital Transformation uses technology to digitalize current business assets or to open new business avenues drawing on the advantages offered by ICT. This concept encompasses a much deeper change in organizations than many assume: for many, Digital Transformation has more to do with user experience, customer relations or the use of electronic selling channels. And yet these are only part of a process that impacts the whole organization.

The following example will help illustrate the rest of this post. Let's take a company that manufactures golf clubs and decides to offer a service (through a mobile app) to monitor their customers' performance. Sensors, embedded in the club, collect data (strength, power, movement, impact, and direction) and map the resulting graphics to the corresponding dashboards. This business venture has it all: a new niche market and the addition of digital capacity to existing elements. In addition, it creates recurring billing, which increases customer loyalty as it's based on continuous feedback.

Continuing with our example, let’s take a look at the infrastructure a company would need to set this up:

  • Information Systems: Let’s start with the basics. All enterprises have information system platforms to support their business operations. Their overall complexity and functionality depends (in part) on each business model and the degree of company digitization.
  • Customer Care Systems: They allow companies to interact with their customers in the areas of pre-sales, sales and after-sales. Here we include websites (and online sales), call centers, customer support (online and offline), social networks and almost anything that increases client/company contact. In our example, the design of the app that shows the progress made falls into this category.
  • Analytics and Big Data: Information is becoming an increasingly important tool for business success. Organizations need platforms to extract valuable information and make real-time decisions (look at the Inditex Group for example, a pioneer in this field). In the example given in this post, and generally for any company that wants to go digital, these platforms present two aspects: in-house analytics (assessing the organization from a business viewpoint) and external analytics (analyzing user/product behavior for improvement or, insofar as the law allows, user data itself).
  • IoT: Although IoT platforms are not too common at the transformation stage, we can foresee a spectacular growth here over the next few years. Let’s not forget that Digital Transformation has a lot to do with digitalizing existing elements, mainly by incorporating sensors to gather data to send to analyzing applications or to storage centers without staff intervention. The Internet of Things at its purest. In our example, this would be the measuring sensors added to golf clubs in order to make them “digital”.
  • Interaction with ecosystems: Increasing customer demands and the specialization these require mean that organizations frequently need to establish joint ventures to increase their competitive edge. These third parties form the company ecosystem and are tightly integrated at both the technological and management levels.

The success of digitally transforming an enterprise is obvious (if nothing else, because of the increase in productivity and competitiveness it brings) and is mainly based on the real-time sharing of information generated by these five systems. For example, prior to launching a new product, an avalanche of negative comments circulating on social networks should provoke an immediate change in design and/or fabrication and orders. Or even reorient the product to a different market than originally envisioned, simply through analyzing user data. Any of these scenarios entails fast information sharing.

To make this possible, the organization communications network must guarantee full and constant connectivity between all points, regardless of location and size. A flexible yet secure network that quickly adapts to meet company needs and offers operators a simple way to manage incoming information and the tools to promptly react to changes.

Our SD-WAN solutions are designed to do just that: to respond to company demands for transport when they undergo a digital transformation (far beyond digital marketing and e-commerce).

In light of all the foregoing, Teldat will be present at the Digital Transformation Trade Fair taking place in Madrid from 23 to 25 May 2017. Our goal is to explain to all those interested in Digital Transformation why this process begins in the network.


SD-WAN has been launched

At the end of 2016, I remember writing in my post that we were going to enter the SD-WAN era. Well, the first Quarter of 2017 has finished and we can definitely say that the SD-WAN era has been launched! It has picked up speed considerably within the market and especially in Teldat as a company. We have introduced the SD-WAN solution on the HOME page of our website. However, there is much, much more to come in the second Quarter of 2017.

Indeed, in the following months of the second Quarter, we will be communicating many more details related to our SD-WAN solution via our website and our social media sites. Apart from this, there are important events in May and June 2017 related to SD-WAN. One of these is the Digital Enterprise Show. Teldat is a Partner at this event and we will be presenting our SD-WAN solution in detail at this show.

Going back to the first Quarter, apart from SD-WAN, as normal there have been many other things revolving around Teldat. In January and February we had a strong presence with our various International Kick Off meetings, including our five events within Latin America. As usual, these events have been covered on our Twitter and LinkedIn accounts, as well as in our photo albums on Flickr.

To end this first Quarter of 2017, we want to take the opportunity to thank all our readers for their loyalty and interest, as well as to welcome all the new followers who have joined our communication channels recently. Indeed, within Teldat we keep the level of interest of our blog high by adding new bloggers, especially from our R&D and Technical departments.

As I am sure that many of you will be taking a Spring or Easter break in the next few days, within Teldat we want to wish you all the best and look forward to being with you again in the second Quarter, which will be even more active and interesting for sure.

Why most 4×4 access points are not worth it

When selecting a new Wi-Fi infrastructure, business customers are often faced with a wide range of devices. Customers with an interest in technology will always ask for equipment that meets the latest technical standards. After all, the aim is to invest in the right technology. It seems clear that the current 802.11ac standard has become the benchmark for everything. But what then are Wave 2, 2×2 MIMO, 3×3 MIMO, and 4×4 MIMO, and why is there a new 802.11ad standard?

But let’s start at the beginning: 802.11ad is a standard for networks operating in the 60 GHz band and has a range of only a few meters. It is typically used with home entertainment equipment to connect media players to TV stations. Hence, 802.11ad is not suitable for connecting laptops, smart phones or tablets.

As for the 802.11ac standard, it is important to note that it only works on the 5 GHz band and that older devices will often only support the 2.4 GHz frequency band. Luckily, most access points using 802.11ac usually contain a second radio module allowing them to offer the 2.4 GHz frequency band. This ensures that older devices can continue to operate. Older Wi-Fi clients that already use the 5 GHz band but that have not yet incorporated the new 802.11ac standard will also be able to connect because 802.11ac is backwards compatible with the rest of the 5 GHz standards.

Having said that, we still need to consider the issue of MIMO. MIMO indicates the number of transmit and receive antennas. The number of transmit antennas also determines the number of streams. The more streams an access point has, the greater the transfer rate. A 2×2 MIMO device can transmit 867 Mbit/s, while a 4×4 MIMO device reaches 1.7 Gbit/s. In order to obtain these rates, the wireless client must have the same MIMO configuration. If a 4×4 MIMO access point serves a wireless client with a 2×2 MIMO device, the maximum data rate will only be 867 Mbit/s. The bad news is that most laptops and tablets only come with 2×2 MIMO technology, and most smart phones have to settle for 1×1 MIMO. Therefore, a smart phone can only achieve a maximum of 433 Mbit/s.
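The rates above follow a simple rule: the usable rate is capped by the smaller of the two antenna configurations, at roughly 433 Mbit/s per 802.11ac spatial stream (assuming an 80 MHz channel with 256-QAM). A quick sketch:

```python
# ~433 Mbit/s per 802.11ac spatial stream (80 MHz channel, 256-QAM)
PER_STREAM_MBITS = 433.3

def max_rate(ap_streams, client_streams):
    """Peak 802.11ac PHY rate for an AP/client pair, in Mbit/s.

    The weaker side of the link limits how many streams are usable."""
    return min(ap_streams, client_streams) * PER_STREAM_MBITS

print(round(max_rate(4, 4)))  # 1733 -> the ~1.7 Gbit/s quoted above
print(round(max_rate(4, 2)))  # 867  -> a 2x2 client caps a 4x4 AP
print(round(max_rate(4, 1)))  # 433  -> a typical 1x1 smartphone
```

Which is exactly why a 4×4 access point brings no speed gain to a fleet of 1×1 and 2×2 clients.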

In view of the above, there’s not much sense in using 3×3 or 4×4 MIMO as most Wi-Fi clients do not have devices that support these technologies. A 3×3 MIMO access point, or even better, a 4×4 MIMO one, is only advisable for Wave 2 models. Wave 2 represents the second wave of 802.11ac chipsets and supports MU-MIMO (Multi-user MIMO). The importance of MU-MIMO can be easily explained with the help of an example. If we have two smart phones that only support 1×1 MIMO and they connect to a 2×2 access point, the two mobile phones will connect to the first antenna. Therefore, both smart phones will share one stream. Each device will receive only half of 433 Mbit/s, that is, 216 Mbit/s. The second antenna, and consequently the second stream will remain unused.

When using a 2×2 MIMO access point with MU-MIMO technology, the process is different. The first mobile phone will connect to the first antenna and the second to the access point's second antenna. The overall data rate will be doubled, as will the maximum number of clients that can connect to the access point. Therefore, 3×3 and 4×4 MIMO only make sense if they support MU-MIMO. And there is a catch: not only do access points have to support MU-MIMO technology, but so do clients. Unfortunately, this is not always the case, even with newer smart phones. Nevertheless, it is only a matter of time.
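The difference between time-sharing one stream and serving clients in parallel can be put into numbers (an idealized sketch: real throughput also depends on signal conditions and protocol overhead):

```python
STREAM_MBITS = 433  # one 802.11ac spatial stream, ideal conditions

def per_client_rate(ap_streams, n_clients, mu_mimo):
    """Ideal downlink rate each 1x1 client sees, in Mbit/s."""
    if mu_mimo and n_clients <= ap_streams:
        return STREAM_MBITS            # one stream per client, in parallel
    return STREAM_MBITS // n_clients   # all clients time-share one stream

print(per_client_rate(2, 2, mu_mimo=False))  # 216 -> the halved rate above
print(per_client_rate(2, 2, mu_mimo=True))   # 433 -> both antennas used
```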

Companies that choose to install a Wi-Fi infrastructure should consider expenses. 4×4 MU-MIMO access points are much more expensive than 2×2 MIMO ones and need a more powerful PoE switch. In addition, they need an Ethernet connection of over 1 Gbit/s to reach their full potential, which also increases the cost of the investment.

In short, I would tell anyone wanting to invest in a new infrastructure that 2×2 MIMO access points cover most business needs. Investing in more expensive 4×4 MIMO technology is only advisable if MU-MIMO-compatible access points are installed. Moreover, it is important to check whether using 4×4 MU-MIMO access points is really necessary. It could be useful for high-performance applications designed to provide Wi-Fi to a large number of clients (e.g., in conference centers). However, for most office applications, the much more economical 2×2 access points are more than enough.

The near future of Wireless technology : Wi-Fi 802.11ax

Each time we download an application, browse the Internet, read an email or watch something on YouTube on our smartphones, we're using some type of wireless technology. For mobile networks this means 3G or 4G LTE. However, when it comes to our home or work environment, we're probably using Wi-Fi.

From 802.11a, endorsed in the 90s (reaching speeds of up to 54 Mbps over 5 GHz radio waves) to the current range of 802.11ac routers (up to 1.3 Gbps over a 5 GHz band and 450 Mbps over 2.4 GHz), many things have changed.

But what does the future hold?

802.11ax, also known as High Efficiency Wireless, is the next wireless communications standard in the IEEE 802.11 range.

This technology is designed to increase overall data rates, especially in high density WLAN user areas where there are multiple access deployment points in close proximity.

While 802.11ax is still in the early stages of development, it's bringing exciting new features such as multi-antenna capacity. If 802.11ac multiplied this capability compared to 802.11n, the use of OFDMA (Orthogonal Frequency Division Multiple Access) enables 802.11ax to further subdivide signals and multiply this capability by up to five times.

The implementation of this new version isn’t just an advance in speed. The 802.11ax Wi-Fi specifically addresses the problems of overcrowded Wi-Fi areas and enhances not only speed but the capacity to keep connections active despite strong interference. Performance promises to be four times better per user in scenarios such as train stations, airports and stadiums.

Moreover, the use of Dynamic CCA, OFDMA and other advanced multi-antenna techniques results in enhanced overall system performance. Mechanisms are needed to coexist with other wireless networks that operate in the same space with authorized devices. As it's fully backwards compatible, it supports devices that still use the IEEE 802.11 PHY/MAC. Last, but not least, is the superior battery life thanks to better energy administration.

This new technology is expected to become fully certified in 2019 and will offer the following key characteristics:

  • Better traffic flow and channel access.
  • With downlink and uplink through OFDMA and MU-MIMO, it specifically addresses multi-user scenarios.
  • The OFDM FFT will quadruple while achieving far less separation (up to four times) between subcarriers. This delivers improved robustness and performance for multipath and outdoor scenarios.
  • Data speed and bandwidth are similar to 802.11ac, with the exception of MCS 10 & 11 with 1024-QAM.
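The "quadrupled FFT" point is easy to verify from first principles: subcarrier spacing is simply channel width divided by FFT size, so four times the FFT points over the same channel means four-times-narrower subcarriers.

```python
def subcarrier_spacing_khz(channel_mhz, fft_size):
    """Subcarrier spacing = channel width / number of FFT points."""
    return channel_mhz * 1000 / fft_size

# 802.11ac uses a 64-point FFT on a 20 MHz channel; 802.11ax quadruples
# the FFT to 256 points over the same 20 MHz.
print(subcarrier_spacing_khz(20, 64))   # 312.5  kHz (802.11ac)
print(subcarrier_spacing_khz(20, 256))  # 78.125 kHz (802.11ax)
```

Narrower subcarriers mean longer symbols, which tolerate more multipath delay spread, hence the robustness gains outdoors.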

Teldat is permanently on the lookout for emerging WLAN technologies as not only do we manufacture access points, but all our routers are already equipped with embedded WLAN.


Developing cloud service solutions – Part 2

Last week, we analyzed different business platform solutions and their features. Today, we will delve into the trends that are forcing application architecture to evolve, such as the emergence of MVC frontend architectures and microservices (both based on the modularization of components). These design proposals help make the technology in which they are implemented more flexible and scalable, as well as helping to reuse the development processes employed.

Factoring an application into different components is not a novel idea. It is at the very core of object design, software abstraction and componentization. Currently, this factoring tends to adopt the form of classes and interfaces between shared libraries and technology levels. What has changed in the past few years is that developers, driven by companies, now create cloud distributed applications.

MVC architecture and frontend framework

MVC architecture separates data and the business logic of an application, offering elasticity, portability and interoperability between components (which proves really useful for content management collaboration in cloud services). This software architectural pattern is based on code-recycling ideas and concept separation, features that aim to facilitate the development of applications and their future maintenance.

The emergence and great success of MVC architecture in frontend development reflects this shift in the stack. In the past few years, we have seen a rise of new MVC frameworks mostly aimed at frontend development. This boom is the answer to implementing a logic and design organization that was previously only featured in the backend. That is to say:

a) We offload part of the logic that used to live on the server onto the client, and favor the integration of other applications that consume said services.

b) Implementation is planned around component modularization. This allows for completely scalable solutions and improves their maintenance.

Two of the biggest companies according to the NASDAQ technological index (Google and Facebook) are letting their proposals for MVC frontend methodologies battle it out: Angular.js and React. This clearly shows how important these types of web service implementations are nowadays.
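The separation of concerns that MVC enforces can be sketched minimally. This is a generic, illustrative example (the `ScoreModel`/`ScoreView`/`ScoreController` names are hypothetical, not part of any framework mentioned above):

```python
class ScoreModel:
    """Model: owns the data and the business logic."""
    def __init__(self):
        self._swings = []

    def add_swing(self, speed_kmh):
        self._swings.append(speed_kmh)

    def average(self):
        return sum(self._swings) / len(self._swings)

class ScoreView:
    """View: presentation only, knows nothing about storage."""
    def render(self, average):
        return f"Average swing speed: {average:.1f} km/h"

class ScoreController:
    """Controller: mediates user actions between model and view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def record(self, speed_kmh):
        self.model.add_swing(speed_kmh)
        return self.view.render(self.model.average())

controller = ScoreController(ScoreModel(), ScoreView())
controller.record(110)
print(controller.record(120))  # Average swing speed: 115.0 km/h
```

Because the view never touches the data and the model never formats output, either side can be replaced (a new UI, a new storage backend) without rewriting the other, which is the maintainability gain described above.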

Microservice architecture and backend framework

Microservices allow systems to be built out of small services, each running in its own process and communicating through lightweight protocols. Normally, a minimum number of services manages things common to all the others (like database access). Each microservice corresponds to a business area of the app.

In addition, each of them is independent from the rest (meaning their code can be deployed without affecting the others). They can even be written in different programming languages.

When compared to monolithic approaches, an advantage of microservices is that they can be deployed independently. In other words, a change in the inventory module will not affect the others: business logic is well separated, deployment is simpler and scalability improves. It also makes it easier to organize multifunctional, autonomous teams that each handle several microservices (scaling the development process in a simpler way).

This architecture poses a series of challenges (such as its automatic deployment) since it introduces a series of complex factors that need to be managed in distributed systems: errors, data consistency, test strategies, etc.
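A microservice in miniature can be shown with only the standard library: one service owning its business area and exposing it over a light protocol (JSON over HTTP), and any other service consuming it. The "inventory" service, its `/stock` endpoint and the data are all hypothetical, purely for illustration:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

STOCK = {"golf-club-sensor": 42}  # the service owns its own data

class InventoryHandler(BaseHTTPRequestHandler):
    """Tiny 'inventory' microservice: one business area, one endpoint."""
    def do_GET(self):
        if self.path == "/stock":
            body = json.dumps(STOCK).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence request logging for the demo
        pass

# Run the service in its own thread (its own process, in a real system).
server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any other service -- possibly written in another language -- consumes it
# through the protocol alone, never through shared code or a shared database.
url = f"http://127.0.0.1:{server.server_port}/stock"
with urllib.request.urlopen(url) as resp:
    stock = json.loads(resp.read())
print(stock)
server.shutdown()
```

Because the consumer only depends on the HTTP contract, the inventory service can be redeployed or rewritten without touching anything else, which is the independent-deployment advantage described above.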

At Teldat, we are developing scalable solutions for our cloud management and administration platform (based on the MVC frontend model), as well as planning the implementation of microservices.

Developing cloud service solutions: Part 1

As of today, the solutions offered in business platforms are based on the development and exploitation of scalability features and cost-saving actions by means of platform virtualization, using effective technologies designed to modularize their components.

It may sound ironic, but the cloud concept is dominated by the desire to centralize services that are part of the decentralized worldwide network (i.e., the Internet). This notion covers a wide range of services often described as a stack, given the number of implementations built on top of one another. As a result, certain development scenarios emerge where scalability, flexibility and a continued technical implementation are key aspects for the future development of such services.

Depending on their layer of implementation, we can sort cloud services into three different categories:

a) Software as a service (SaaS): found in the top layer, these are comprehensive applications offered as a service over the Internet. Their greatest benefit lies in the fact that they are universally accessible through the Internet.

b) Platform as a service (PaaS): the idea behind these applications is the same applied to SaaS but the provider offers the “middleware” to the customer (i.e., it is the encapsulation of a development environment and the packaging of a series of modules and plug-ins that provide a horizontal functionality).

c) Infrastructure as a service (IaaS): any infrastructure that is provisioned and managed through the Internet, granting access to virtualized components. They allow resources to be scaled up and down.

Chrome OS, a Google-developed operating system widely used in micro-laptops (especially in the US) that universalizes access to services, is a good SaaS case study example. It is designed to work in the cloud and uses the browser as main interface.

In the SaaS and PaaS layers, the supplier provides scalability as part of the service pack. The evolution in the implementation of MVC architectures is closely linked to the solutions proposed for these scenarios, which have to do with virtualization possibilities at the IaaS level.

The virtualization of platforms linked to domestic consoles is being used to monitor technical developments and stretch their marketable service lives through the pre-processing of certain effects that would otherwise require a vast allocation of resources. Another example would be that of Folding@home, a distributed computing project where resources belonging to PlayStation3 devices are used to carry out simulations on molecular dynamics for the medical field.

The scientific world is making the most of universal services implemented without the need for a physical infrastructure. The IBM Quantum Experience project, which gives all devices universal access to a quantum computer, is a worthy example. It works with a five-qubit processor (the latest development in quantum architecture) and may scale to bigger systems.

In next week’s post, we shall cover development solutions for the implementation of software architecture modules based on the modularization of components to exploit scalability.

Improving Efficiencies Through PLC PRIME Communications Gateway

Traditional PLC-based smart metering solutions require the installation of PLC data concentrators in remote and unattended second tier substations. PLC data concentrators handle communications with the different smart meters installed at consumer premises, consolidate the metering data and send it to the AMI management systems.

This option involves storing confidential metering data belonging to consumers at remote and unattended locations, forcing communication devices to optimize data transmission to centralized management systems. Thus, interoperability between management systems and data concentrators in the field is a must.

A private cloud for smart metering would reduce on-field infrastructure complexity, minimizing data storage at remote locations while optimizing the operation of centralized management systems relying on a dependable communication infrastructure.

On-field infrastructure is minimized by using a single device per substation, which acts as a gateway between the PRIME PLC network and the IP networks. Therefore, considering that most current deployments involve the combination of PLC concentrators and a communication gateway, the number of on-field devices decreases considerably. Moreover, this new architecture has a very positive side effect, as it results in a significant security improvement: the sensitive data that used to be stored in the concentrators at the secondary substations is not stored in the gateway at all.

Cloud-based and virtual systems are becoming more and more popular thanks to their maintenance efficiencies and scalability advantages. PLC network virtualization brings the following advantages:

  • Improving maintenance and operation tasks: Current smart metering deployments involve multiple meter manufacturers and multiple concentrator vendors. Although interoperability among them should be taken for granted due to DLMS/COSEM standardization, different vendors may implement the standards in different ways (having an impact on deployments and troubleshooting operations). Current deployments require coordination of multiple actions from many different vendors. Having a unique concentration point that adapts to every meter in the network simplifies interoperability certification, troubleshooting and corrections at the DLMS layer.

  • Improving Reliability: Cloud-based software solutions allow replication of servers in the power utilities core network to provide redundancy and high-availability of service.
  • Improving Security: Core metering infrastructure can be strictly secured by different DMZs, advanced firewalls and secure databases.
  • Reducing upgrading cost: Multiple AMI management systems can obtain information from this central software unit. If newer standard versions or data modelling are required by newer management systems, modifications are to be made in a unique central system instead of in multiple field devices (which might have memory or throughput limitations for those new features).

Considering the above-mentioned advantages, electric utilities are considering the use of PRIME gateways as a metering solution for deployment in rural areas. There, secondary substations are typically pole-mounted and concentrate a reduced number of meters.

Traditional data concentrators can only be installed in the secondary substations connected to the MV grid, with all the installation constraints related to this requirement.

It is, therefore, important to mention the added versatility and ease of installation the PLC PRIME Communications Gateway brings, as it can be installed not only at the MV grid but also at any point of the LV grid.

Teldat is bringing its more than 30 years of experience in complex communication networks to this new concept. For that reason, Teldat has recently launched a new REGESTA COMPACT PLC family of devices to cover the specific needs of Smart Metering deployments.


How much has business communications changed?

Amazingly enough, it's only been ten years since the first iPhone was released. To look back on that first version is a fascinating experience and makes you realize just how far smartphones have come.

Ten years ago, the Apple store did not even exist! The only applications around were those the manufacturer installed. The cameras were nothing special either (all of 2 megapixels) and were limited to taking photos. As for communications, the limitations were greater. Network connectivity was based on poor 2G/Edge technology (speed wasn't as essential then), and the messaging application wasn't even able to attach a photo. Today we complain about the short battery life of a smartphone, but this was nothing compared to the technology of a decade ago, when Apple suggested their customers should disable GPS or WiFi when unessential, as these features drained batteries, leaving just a few short hours of actual use. The iPhone was, however, an enormous step forward, almost killing off the then market leaders (Blackberry, Palm and Nokia), and Android was yet to make its appearance.

The technological turmoil we are living in today sometimes leads us to trivialize the incredible changes going on around us, but it's clear that over this past decade, personal communications have radically changed. Now what? Has the same thing happened to business communications? Logic tells us that company communications would evolve at a similar rate; however, this is not the case. Ten years ago, enterprise communications were mainly based on MPLS networks, technology developed at the end of the last century and still in use today. Obviously, the transport mechanism has changed, from ADSL to ADSL2, ADSL2+, VDSL and VDSL2 and more recently optical fiber (increasing speed); however, the underlying network is still unchanged, with the processes for provision, management and operations practically unaltered. So, given the incredible evolution in information technology, this is really quite baffling.

Present day technology means business communications can use internet lines, which are reasonably secure and reliable, make routing decisions based on applications that generate traffic (rather than technology based on IP addresses) and align network operations with business demands. Current technology allows full automation of office installations (without the presence of minimally qualified technicians) sidestepping the need to know exactly how each application uses the network at every point. Today, network behavior can be fully and easily modified in minutes (without involving a management center that, given its enormous inertia, typically needs weeks to implement significant changes).  The scope for growth is enormous and the winds of change bringing “SD-WAN” to the fore are beginning to blow. Teldat is fully behind these changes with a goal to offer a solution capable of gradually transitioning, in order to help enterprises minimize the risks and impacts a radical change may bring to a business asset as necessary as their communication networks.


Low Power Wide Area (LPWA) networks – a huge impulse for IoT

The Internet of Things (IoT) is a concept that has been with us for many years now, and it has slowly been gaining ground over the last few years. But what will surely give IoT a huge impulse are the networks classified as Low Power Wide Area networks (LPWA).

Why is this so? Mainly because IoT applications and devices need low costs and long battery lives to be economically viable, and LPWA can offer both. Indeed, initial research suggests that there will be between 5 and 7 billion LPWA connections by 2022!

Apart from low costs and long battery lives, LPWA will boost the parts of the IoT industry that require low data rates, low mobility, hard-to-reach locations, low power consumption, long range and security. No matter which way you look at it, existing mobile technology is not ideal for these scenarios, which makes LPWA all the more feasible.
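The low-power claim can be made concrete with a back-of-the-envelope battery-life estimate for a sensor that wakes briefly to transmit and sleeps the rest of the day. All the figures below are illustrative assumptions, not measurements of any particular LPWA technology:

```python
def battery_life_years(battery_mah, sleep_ua, tx_ma, tx_seconds_per_day):
    """Rough battery life from a time-weighted average current draw.

    Ignores self-discharge and temperature effects, so real-world
    figures will be lower."""
    sleep_seconds = 86400 - tx_seconds_per_day
    # average current in mA, weighted by time spent in each state
    avg_ma = (sleep_ua / 1000 * sleep_seconds
              + tx_ma * tx_seconds_per_day) / 86400
    hours = battery_mah / avg_ma
    return hours / 24 / 365

# Assumed: 2400 mAh cell, 2 uA sleep current, 40 mA while transmitting,
# 60 seconds of transmission per day.
print(round(battery_life_years(2400, 2, 40, 60), 1))  # ~9.2 years
```

Even these rough numbers show why duty-cycled LPWA devices can plausibly run for years on a single cell, whereas an always-on cellular modem cannot.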

What is also true is that existing cellular operators are prime candidates to offer LPWA, because they don't need to make large changes to their existing infrastructure: initially, they would just need to enhance their current networks. Moreover, the coverage of these cellular networks already spans virtually the whole globe, and roaming allows country borders to be crossed without any problems. Also, there tend to be several cellular operators in each country, which means there is competition, and in turn this helps to keep prices down.

Many industries have the requirements mentioned at the beginning of this article. To name but a few: agriculture, utilities, health, automotive, transport, manufacturing and wearables.

Utilities: all utility companies need to meter and monitor low volumes of data on a periodic basis, whether to measure client consumption or as a backup system to detect faults, leaks, etc. LPWA could also be used at energy production plants.

Smart Cities: smart cities are not only about the utilities industry, but much more when we consider LPWA. This technology can be implemented in many public services: from important services that need tight control, such as street lighting, local police and sewers, to services from which city councils can obtain revenue, such as parking, bicycle hiring and central city toll areas, to name but a few.

Manufacturing: in the same way, LPWA-based backup systems can be used in the manufacturing industry to detect faults along any part of an assembly line, in warehouses and elsewhere, or even to monitor procedures and keep everything at optimum levels.

Buildings: LPWA can be integrated into both private and business buildings, for example to control heating and lighting faults or to control machines themselves. Within a home, that could be the temperature of a fridge; within a business, the ink level of printers.

Health: healthcare can initially use LPWA in two basic areas: for patients and for hospital infrastructure. Having patients at home is becoming increasingly popular because, on the one hand, it reduces costs drastically and, on the other, patients tend to improve faster at home. Using LPWA to monitor patients' blood pressure, oxygen levels, etc. is vital to being able to send them home early. As for infrastructure, hospital buildings are prime candidates for everything related to smart buildings.

Agriculture: LPWA networks will make it possible to keep track of live animals, whether livestock or even wild animals, and detect their whereabouts. Soil can also be monitored to keep humidity at optimum levels.

Transport: beyond tracking the vehicles themselves, the tracking of transported goods is not currently done online; it is mostly done with barcodes as the goods pass through the different phases of their journey. With LPWA, however, packages could be monitored at any time.

Wearables: with LPWA, children and elderly people could wear simple devices that keep track of them, so that they don't stray from designated areas.

Overall, it is clear that LPWA is going to boost IoT, and that's the sense one gets from all those involved in this industry: mobile operators, infrastructure companies, device/module/chipset manufacturers and integrators. Within Teldat, we have been manufacturing mobile routers and devices for nearly twenty years, and we are keeping a close eye on LPWA, as we have with other mobile technologies in the past.