The telecommunications market continues to make progress in 2016

As we pass the half-year mark of 2016, we have decided to take the opportunity to recap what this year has meant so far for our industry.

The telecommunications market and its technology are forging ahead faster than ever, and on several fronts. A number of factors may have contributed to this acceleration, including the digital transformation, the incorporation of Big Data, the conversion to SD-WAN technology, and the deployment of new competitive services by network providers.

Whatever the reason, the telecommunications market is evolving smoothly at present, and everything appears to indicate that it will continue to develop at an ever-increasing rate.

Teldat blog: the latest telecommunications industry news

We have always made every effort to be at the forefront of our sector. That means closely following developments in communication technologies and carrying out continuous research, innovation and development of our products. Some time ago we decided to take a step further and share our wealth of knowledge and experience as a telecommunications company. Hence, the Teldat blog was born.

From the blog’s beginnings until now, we have posted interesting entries about a variety of topics related to corporate communications.

During 2016, we have continued to work on our blog, where bloggers well known to Teldat have kept posting their articles. But we have also added some new, highly motivated bloggers who add value to our communication efforts. Thanks to the large number of participating authors, we can cover highly diverse subjects, and we do so from very different perspectives that vary depending on each specialist’s field of expertise.

Some of our readers may take a break over the next few weeks (or already have). But one thing is for sure: at Teldat, we wish to thank our readers for their interest and loyalty. When you come back, we’ll be ready, reporting to our customers and followers all the latest news on a highly dynamic and constantly changing market: the telecommunications market.

ATM Security

Banks are currently one of the primary targets of criminals; quick access to cash or personal bank account information is a juicy haul. Automated teller machines (ATMs) are a security weak point: while bank-located machines usually have cameras and other security measures in place, off-site ATMs installed independently don’t have the same kind of infrastructure. There are plenty of articles on the Internet about ATM skimming, which is when a thief attaches an external device to an ATM to capture a card’s electronic data, including the PIN, in order to create an exact copy of the card. See this link to read an article from the North American press on ATM skimming.

It is in this context that we need to provide remote management mechanisms that ensure thieves can’t gain access to confidential information or a bank’s network, or impersonate an ATM. While access control mechanisms, authentication, firewall ports, etc., can be used preventatively, a thief might still be able to gain physical access to an ATM. If there is no way to remotely block the machine, an attacker may have sufficient time before the police or security services arrive, especially in remote areas. There are two very effective mechanisms to physically control the status of the ATM from a general network center:

  • Access to a device that disables the ATM by cutting its power. By running a command on the communications device, you can control equipment that physically cuts the power and turns off the ATM or any connected devices, thus preventing attackers from carrying out operations that depend on electricity.
  • If thieves cut the physical communication cables to prevent remote access to the communications device, you can still reach it over a wireless WAN backup. Even if no data can be transmitted, as long as the SIM is active you can send commands via SMS that turn the ATM’s power on or off. This also provides a dual security mechanism: to stop just any number from accessing the device’s controls, only one or a few numbers are enabled, preventing unauthorized access (see the sketch below).
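
As a rough illustration of the second mechanism, here is a minimal sketch (hypothetical names and numbers, not Teldat firmware) of how incoming SMS commands could be checked against a whitelist of enabled numbers before acting on the power relay:

    # Hedged sketch: only pre-enabled numbers may switch the ATM's power.
    AUTHORIZED_NUMBERS = {"+34600000001", "+34600000002"}  # example numbers

    class PowerRelay:
        """Stand-in for the device that physically switches the ATM's power."""
        def set_power(self, on: bool) -> None:
            print("ATM power", "ON" if on else "OFF")

    def handle_sms(sender: str, body: str, relay: PowerRelay) -> str:
        if sender not in AUTHORIZED_NUMBERS:
            return "ignored"                       # unauthorized number: do nothing
        command = body.strip().upper()
        if command in ("POWER ON", "POWER OFF"):
            relay.set_power(command == "POWER ON")
            return "done"
        return "unknown command"

    handle_sms("+34600000001", "power off", PowerRelay())  # acts on the relay
    handle_sms("+34999999999", "power off", PowerRelay())  # silently ignored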

At Teldat we have both preventative and reactive security measures in place on thousands of ATMs worldwide, providing full control over the devices at all times, whether over fixed or mobile lines.

What’s the difference between a router and a PC?

How does a personal computer (PC) differ from a router?

Many people believe they have completely different electronic systems, but this is not entirely accurate. While it’s true that there are quite a few differences, they also have a number of characteristics in common.

First of all, routers fall into the so-called embedded systems category. “What is an embedded system?”, you may ask. Well, it’s simply a computer system designed to carry out a limited number of tasks, meaning its hardware and software are far more specific than those of a PC. Other examples of such systems are printers, GPS navigation systems and DVD players.

Both PCs and routers are computer systems equipped with bootware stored in non-volatile memory (usually flash or EEPROM, Electrically Erasable Programmable Read-Only Memory) used to initialize the hardware and boot the operating system. Let’s take a closer look at how this bootware works in the different systems.

PCs first execute a piece of software known as the BIOS (Basic Input/Output System). Its main tasks are to initialize the hardware, carry out the POST (Power On Self Test), which basically checks that the hardware is in perfect condition, and load the bootstrap (boot manager) that loads the operating system into memory. The BIOS can also serve as a layer between the operating system and the hardware.

Routers, on the other hand, use embedded bootloaders. This software boots the processor (and surrounding devices) and loads the operating system into memory. The operating system is normally stored compressed, together with the bootloader, in flash. Typical PC features (such as the Power On Self Test) may also be included. The main advantage of embedded bootloaders is that they occupy little space and boot far more quickly than the average PC BIOS. Teldat equips each of its routers with a specifically and individually designed embedded bootloader, ensuring its devices boot as quickly as possible.
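
As a conceptual sketch only (hypothetical objects, not Teldat’s actual bootloader code), the stages just described could be summarized as follows:

    # Hedged sketch of the embedded boot sequence described above.
    import zlib

    def embedded_boot(cpu, flash, ram):
        cpu.init_core_and_peripherals()            # boot the processor and surrounding devices
        image = flash.read("os.image.gz")          # OS image stored compressed in flash
        entry = ram.load(zlib.decompress(image))   # decompress the OS into RAM
        cpu.jump_to(entry)                         # hand control to the operating system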

Broadly speaking, PCs and routers are, at their simplest, two computer systems. Routers, however, are more feature-specific, with software and hardware designed to optimize certain characteristics. PCs, on the other hand, are simply unable to carry out router functions, despite their similarities.

The path has been cleared to make way for DRAM

Our analysis of the evolution of memory begins in the dynamic memory era, that is, with dynamic random access memory (DRAM).

Without going into technology specifics such as the structure of a memory cell, the distinguishing characteristics of DRAM versus SRAM (static RAM) are basically twofold: (1) the full address is usually presented to SRAM just once, while it is multiplexed to DRAM, first the row and then the column; (2) DRAM also needs to be refreshed periodically to maintain the integrity of stored data.

This memory family kicks off with Fast Page Mode (FPM) DRAM. In the early days, 5V technology and asynchronous memories were used, as these memory devices did not require a clock signal input to synchronize commands and I/O. Data access time, from the moment the memory controller (located in the CPU or chipset) supplied the row address, was around 35 ns, and 13 ns from the column address. Right from the first implementations, once the row address had been supplied, it was possible to vary the column so as to have data arriving every 13 ns. A further improvement came in 1995 in the form of extended data output (EDO) DRAM, which simply held the read data stable until the falling edge of CAS# in the next cycle, rather than putting it into high impedance at the rising edge of CAS#. This gained precharge time (tCP), which allowed bursts to be shortened from X-3-3-3 cycles of the front side bus (FSB) to X-2-2-2 cycles. This simple improvement enabled a ten percent increase in performance while maintaining the price. It was the Pentium era, with a 133-200 MHz internal clock and a 66 MHz FSB.

The next step for the technology is synchronous DRAM (SDRAM). Among the changes introduced with this type of memory: (1) among the signals reaching the device is a 100-133 MHz (PC100 and PC133) clock signal (hence the name), (2) the power supply voltage is reduced to +3.3V, marking the beginning of a continuous reduction, (3) signaling is LVTTL, (4) read and write accesses are burst-oriented, with the burst length and other operating parameters programmed during initialization[i], and (5) the memory is organized into four internal banks. As with FPM and EDO, accesses begin with the registration of an ACTIVATE command, followed by a READ or WRITE command. The address bits registered with the ACTIVATE command select the bank and row to be accessed; the address bits registered with the READ or WRITE command select the bank and the starting column location for the burst access. For the PC133 specification, access time from row activation is 30 ns (tRCD + CL x tCK), and 15 ns (CL x tCK) from registration of the READ command until the first burst beat becomes available; the following three beats arrive at a rate of one per clock cycle: X-1-1-1. On top of this, once a row in a bank is open, any column in that row can be accessed without having to wait for the row to be reopened; the burst under these conditions is 2-1-1-1 compared with 4-1-1-1. However, by far the biggest advance over the earlier EDO had more to do with the possibility of initiating a second access in another bank while the previous one is still in progress than with latency. Thus, bursts could be juxtaposed, X-1-1-1-1-1-1-1 compared with X-2-2-2-X-2-2-2, while at the same time the clock frequency increased from 66 to 133 MHz. By the year 2000, this technology had completely replaced the former EDO.
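
As a quick check of the PC133 figures quoted above (a sketch; CL = 2 and tRCD = 15 ns are inferred here from CL x tCK = 15 ns and tRCD + CL x tCK = 30 ns, not stated explicitly in the text):

    F_CLK = 133e6                    # PC133 clock frequency (Hz)
    T_CK = 1 / F_CLK                 # clock period: ~7.5 ns
    CL = 2                           # CAS latency (inferred)
    T_RCD = 15e-9                    # row-to-column delay (inferred)

    row_access = T_RCD + CL * T_CK   # first beat from ACTIVATE: ~30 ns
    col_access = CL * T_CK           # first beat from READ: ~15 ns
    burst4 = row_access + 3 * T_CK   # 4-beat burst, one beat per cycle

    print(f"row access: {row_access * 1e9:.1f} ns")
    print(f"column access: {col_access * 1e9:.1f} ns")
    print(f"full 4-beat burst: {burst4 * 1e9:.1f} ns")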

The next improvement came in the form of Double Data Rate (DDR) SDRAM: (1) the power supply voltage is reduced to +2.5V, (2) signaling is now SSTL and remains so right through DDR3, (3) size increases to 1 Gb, (4) the clock becomes differential, and (5) each byte/nibble is accompanied by a co-directional data strobe (DQS), used as a clock to capture data on (6) both edges, hence the name. This innovation doubles the amount of information transferred in each clock cycle. The voltage reduction and other improvements allow the clock frequency to increase to 167 MHz (although there were 200 MHz versions powered at +2.6V). Although the access times from the row and the column to the first data beat are 30 ns and 15 ns, respectively, subsequent beats are received every 3 ns (tCK/2 @ 167 MHz). Thus, all of the information contained in a burst is received in 42 ns, or 24 ns if the row is already open.

The evolution continues with DDR2: (1) the power supply voltage is reduced to +1.8V, (2) size increases to 2 Gb[ii], (3) the number of banks is doubled to eight, (4) DQS strobes become differential, and (5) dynamically activated on-die termination (ODT) resistance is included on data lines to improve signal integrity. The clock frequency increases to 533 MHz. Row and column access times to the first burst data beat vary little, at 26.25 ns and 13.125 ns respectively, but subsequent beats have far lower latency (0.94 ns). The entire 4-beat burst is transferred in 30 ns from row activation, which is the worst case.

The next generation, and we are now nearing the present, is DDR3(L): (1) the power supply voltage is reduced to +1.5V, and even +1.35V in the low-power version, (2) capacity ranges from 4 Gb (+1.5V) to 8 Gb (+1.35V), (3) the number of banks remains the same, (4) the number of ODT termination values increases from three to five, (5) you can choose between two different memory driver strengths and, most importantly, (6) the bus routing paradigm between the DRAM and the memory controller changes. We pass from a symmetrical tree-type topology for command/address/control signals, with static skew control between them and the data bus, to a “fly-by” topology for command/address/control and clock (CK) lines, de-skewing the DQS strobe to clock (CK) relationship at the DRAM through a process that the controller, aided by the memory, must implement during the so-called Write Leveling initialization phase. During this phase the controller adjusts each byte’s DQS strobe displacement in submultiples of the clock period until it is aligned with the clock signal. During each step of the process, the DDR3 memory samples the clock signal at the rising edge of DQS, returning the value on the least significant bit of the octet/nibble. The process ends when the controller receives a CK transition event from 0 to 1. The corresponding delay is the value that de-skews the trace length mismatch between ADD/CMD/CTL/CK and the corresponding octet/nibble. The new topology allows the operating frequency to double to 1066 MHz, so that row and column access times to the first data beat are 13.09 ns and 13.13 ns, respectively, while the latency to the following beat is reduced to 0.469 ns. Thus the burst transfer takes 28.1 ns from the row and 15 ns from the column.
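
The write-leveling procedure lends itself to a short sketch (hypothetical primitives; the real algorithm lives in the memory controller hardware and its initialization firmware): the controller steps each byte lane’s DQS delay until the DRAM reports that the DQS rising edge samples CK going from 0 to 1.

    def write_level(lane, step_ps=25, max_delay_ps=1250):
        """Return the DQS delay (ps) that aligns this lane's DQS with CK.

        lane.sample_ck_at(delay) is an assumed primitive: the memory samples
        CK on the DQS rising edge and returns it on the lane's least
        significant bit, as described above.
        """
        previous = lane.sample_ck_at(0)
        for delay in range(step_ps, max_delay_ps, step_ps):
            current = lane.sample_ck_at(delay)
            if previous == 0 and current == 1:  # CK rising edge found
                return delay                    # delay that de-skews DQS vs CK
            previous = current
        raise RuntimeError("no CK transition found within delay range")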

Finally we come to the latest step in the DRAM evolution, DDR4: (1) the power supply voltage is reduced once again, to +1.2V, (2) signaling changes to POD, (3) capacity increases to 16 Gb, (4) the number of banks is doubled to 16[iii], (5) frequency is increased to 1600 MHz and, as a result, (6) we see an increase in the number of ODT values (with up to seven possible values). Performance increases proportionally with the clock frequency.

In view of the calculated access times, which always hover around 30 ns from the row and 15 ns from the column, you might be forgiven for thinking that performance has failed to increase significantly over time. However, such a perception does not do justice to reality, since the controller usually keeps several banks active (up to sixteen with DDR4), so that while the aforementioned latency remains, the controller can schedule accesses so the bursts are placed back-to-back, achieving a throughput two orders of magnitude higher than FPM and EDO. Let’s take an example: suppose the program flow requires the activation of one row followed by another, and so on, and that, as a result, the controller activates the first row in cycle N, the next in cycle N+2, and so on. With DDR4-3200 we would have the first data beat available in cycle N+44, the second in N+44.5, the third in N+45 and the fourth and last of the first burst in N+45.5. The first beat corresponding to the N+2 activation would appear in N+46, the second in N+46.5 and so on. As you can see, the throughput is one data beat every 0.5 x tCK, with tCK being the inverse of 1600 MHz (625 ps), which expressed in transfers per second is 3200 MT/s, compared to the 22 and 33 MT/s data rates obtained with FPM and EDO, respectively.
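
The arithmetic of this example is simple enough to verify in a few lines (all values taken from the text above):

    F_CLK = 1600e6                # DDR4-3200 clock (Hz)
    T_CK = 1 / F_CLK              # 625 ps
    throughput = F_CLK * 2        # one beat per tCK/2: 3200 MT/s

    print(f"tCK = {T_CK * 1e12:.0f} ps")
    print(f"throughput = {throughput / 1e6:.0f} MT/s")
    print(f"speed-up vs EDO: ~{throughput / 33e6:.0f}x")  # vs the 33 MT/s quoted above
    print(f"speed-up vs FPM: ~{throughput / 22e6:.0f}x")  # vs the 22 MT/s quoted above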

Teldat devices haven’t remained on the sidelines of this evolution. The N+ used FPM DRAM operating at 33 MHz; the ATLAS200, ATLAS 250 and ATLAS 150 used SDR SDRAM operating at 50 MHz, 66 MHz and 100 MHz, respectively, depending on the version; the ATLAS160 and ATLAS360 use DDR2 at 200 MHz (400 MT/s) and 266 MHz (533 MT/s); the ATLAS6x inaugurated the use of DDR3 at 333 MHz (666 MT/s); and the more modern iM8 and i70 routers use that same memory type to achieve 1600 MT/s transfers.


[i] Burst lengths of 4 are considered in the text.

[ii] Only single-die parts are considered.

[iii] The sixteen banks are actually organized into four groups of four banks. New timing restrictions relating to accesses within the same or different bank groups have implications for controller design.

Brand internationalization

In a highly dynamic technological sector led by powerful multinationals that put considerable emphasis on branding, any kind of international presence and acceptance for small and medium-sized businesses is both noteworthy and appreciated.

At the same time, staying and even expanding in this market is an exciting challenge. Responding to it requires adapting and even reinventing certain aspects of a business, some of them fairly obvious, like brand image, or creating a powerful international network of offices and business partners to provide commercial services and local technical support. The impact on the product is also quite clear, from product definition to marketing, through design, manufacturing and other product-related aspects. And it is on this topic that I wish to share with you certain aspects of brand internationalization.

First, starting with the product name itself, we need to think globally, keeping a watchful eye on legal cases like the one that forced “Dunkin’ Donuts” to change its name in Spain to “Dunkin’ Coffee” because Panrico had already registered the word “Donut”. The phonetics of different languages are another thing to watch and there are a number of funny anecdotes mainly in relation to car models; there’s the “Nissan Moco” or the “Mazda Laputa”, which failed to sell in Spain, and the “Fiat Marea” or the “Mitsubishi Pajero”, which ended up being called “Montero”.

Moving on to less humorous topics, I should mention the certification requirements that different countries impose for marketing and imports. First, there are the internationally recognized CE, FCC and UL certifications, which carry their own not-insignificant costs, especially for devices incorporating radio communications (WLAN, WWAN), since they require testing in different bands and band combinations. Any slight component change or minor device modification will invalidate these certifications, taking you right back to square one. Next, some countries have their own standards and certification requirements that demand local tests and often duplicate certification efforts. And as if that were not enough, in some of these countries the responsibility for ensuring product compliance lies with the importer, with all the obstacles that implies. It is also quite common for import permits to have expiry dates, sometimes as short as six months, which in most cases requires a simple administrative process to renew the permit, with a mandatory fee of course.

And I mustn’t neglect to mention the matter of liability. In the European Union, manufacturers and importers are responsible for ensuring the compliance of goods imported from third countries. This is because, according to Decision 768/2008/EC, non-EU manufacturers are excluded from all liability for a good; the liability falls on the importer (who is easier to pursue legally…). Having said that, according to the same text, everything is relative, and under certain marketing conditions a third party marketing goods may end up being held liable.

The common legislative framework of the European Community clearly acts as a homogenizing element, facilitating and driving the market. But while affixing a CE marking to a product provides a guarantee of product safety and reliability, not everything that glitters is gold! Watch out for a strikingly similar “CE” marking that stands for “China Export” and only means that the product was manufactured in China.


The international presence of Teldat, which in 2015 alone helped companies in fifty-five countries on five continents with their communications, bears witness to this Spanish company’s efforts in brand expansion and international product and brand recognition. The company’s manufacturing experience, together with a tightly woven network of partners fully committed to customer satisfaction, ensures efficient communication solutions wherever customers choose to entrust their communications to Teldat.

Digital transformation. Digital connectivity

A couple of weeks ago, Madrid was the venue for what has been heralded as Europe’s largest digital transformation trade show. Leading multinationals and the smallest of startup companies took the opportunity to showcase their products and services related to this new technological revolution that aims to change the way we live and work… Or maybe not?

As usual, everything depends on how you define digital transformation and on how companies interpret the advantages and disadvantages that a digital transformation (in varying degrees) can bring to their business model.

Broadly speaking, digital transformation is about using new technologies to improve business. If we analyze the two words that make up the concept, a digital transformation is clearly, above all else, a transformation of sorts. That is, it is a change in the way things are done.

“User experience” enhancement was the theme on practically every stand at the event, with the majority of exhibitor presentations, demos, products and services revolving around it. And rightly so, because the consumer (or user) stands at the center of all processes undertaken by companies to digitally transform themselves, for two reasons:

1.      Cloud services and the simplification of user interfaces and applications (thanks to Apple and other companies) have been behind the first phase of the digital transformation. This has allowed consumers to perform tasks and jobs previously done by companies, a shift known as consumerization. New demand is generated for a different set of products and services that companies must find new ways of satisfying, leading them to transform their business models, processes and activities.

2.      Thanks to new technology, the way of reaching the user has changed dramatically in recent years. To start with, user access is more direct and less complicated, which has also led to a transformation of distribution channels. A clear example is the retail chains and the problems they have in maintaining store traffic against the push of online sales. In the end, the users or customers will be the ones who decide how the companies they buy from should act; this holds both for B2C companies and for B2B companies, which end up being B2B2C.

But this is only part of the digital transformation process. The really revolutionary and disruptive fundamentals of these processes are “what we can’t see”: the inner part, the processes, procedures, operations and maintenance, where huge savings and improved efficiency can be found for businesses. As we have seen, some changes are imposed by consumer demands. Others are to be found among the infinite possibilities offered by the four technological pillars of any digital transformation process: mobility, cloud, the Internet of Things and the data networks on which the whole structure holding these increasingly critical processes ultimately rests.

A simple example: a large distribution chain that uses RFID tags on its products, equips its carriers with tablet applications to monitor deliveries and delivery notes, and stores its business data in a distributed cloud so that it can perform big data analytics to improve its business processes. None of these actions has a direct or significant impact on the customer, yet all of them have two things in common: they significantly improve internal efficiency, control and knowledge, and they employ different models of use of the data networks.

And this is where the data network becomes central to digital transformation processes. In recent years, the network’s purpose has been to provide reliable continuous bandwidth to support corporate data traffic.

In the current environment, networks must dynamically adapt to cater for specific needs across a company, providing distributed connectivity that is practically tailored to the different business areas. With regard to the operational departments, networks should be very simple to set up (regardless of how complex they are) and provide full control, in terms of both management and the total cost of owning and operating them.

This is the digital connectivity that digital transformation needs, and the challenge to which the industry must respond. It is a challenge that we at Teldat are more than prepared to take on, with SD-WAN solutions that not only solve the digital connectivity requirements of IT departments within organizations, but can also be integrated into carrier business and supply models to create an integrated corporate communications platform, tailored to the transformation processes.

The divine spark – Wireless LAN is in the air

Wireless LAN has been conquering the market for many years now and continues to do so. This is no news to us. In the business world, Wi-Fi plays a vital part in processes, and in our everyday life hardly anybody can imagine living without wireless LAN.

We are used to having Internet access almost anywhere for our mobile devices such as tablets and smartphones. In Germany, free Wi-Fi will soon be offered even in churches around Berlin and Brandenburg. In the first step, 220 churches will offer wireless LAN, and the plan is to extend Internet services to all 3,000 Protestant churches in the region. The cleverly named project is called “Godspot”, and the first hotspots – sorry – Godspots will appear in the famous Französischer Dom on Berlin’s busy Gendarmenmarkt square and the iconic Kaiser Wilhelm Memorial Church on Berlin’s Breitscheidplatz.

Godspot’s aim is to build a safe and familiar home for the Protestant Church in the digital world. The places of communication have shifted and much of it now happens in digital social networks. According to the Church, Godspot’s use has no strings attached. There’s no registration, no login, and the Church insists it won’t push advertising or retain users’ personal information. However, when users first sign on, they’ll be taken to a webpage with information on the church building and local parishes.

Legal obstacles to Wi-Fi in German Protestant churches

Germany currently has tough legislation regarding a network provider’s accountability for the online activities of its users. If, for example, you illegally download software on my network, I face the consequences. Though the German federal government says it’s working to change this legislation, Godspots will be installed prior to any new legislation taking effect. To avoid liability, the Church has appointed a couple of Berlin companies as the service’s legal providers.

Though an estimated 61 percent of Germans are Christian, a 2013 report by Die Welt claimed that Christians will become a minority within the next two decades. Whether Godspot is an attempt to spread God’s word or an effort to meet the demands of the digital age, Berlin’s churches will surely see an uptick in attendance – if not for the sermons then for surfing the web. And the end justifies the means, as long as the “divine spark” leaps over to the audience.

Business cases for new technologies such as wireless LAN solutions seem to be unlimited, finding their way into almost all parts of our daily and even our spiritual life. Teldat, as a manufacturer of access points and provider of wireless LAN solutions, looks forward to the future developments of this market.

A total change to the automobile industry with 5G

As Advanced LTE becomes more of a day-to-day reality, the industry is quickly moving towards the next mobile generation, 5G technology, which will bring important improvements in terms of reduced latency, increased reliability and higher throughput.

The automotive industry is one of the industrial sectors looking into 5G very seriously to bring in changes of all types: from the increased use of IoT to connect cars with each other, with traffic and road services, or with pedestrians, to the use of mobile technology to offer the driver totally new on-board services.

5G technology and automated driving

Automated driving is one area that could start to progress with the introduction of 5G. To date, some automated driving tests have been carried out by cars with on-board sensor systems and high-resolution digital maps, but it is foreseen that further progress will not be possible without wireless technology. Vehicles will not only need an on-board sensor system; they will also need vehicle-to-vehicle and vehicle-to-infrastructure connections to turn automated driving into a reality.

It is forecast that when automated or semi-automated driving is achieved, wireless connectivity will enable drivers, and passengers as well, to make greater use of on-board services for leisure and entertainment, or to use the car as a workspace. It is expected that as car connectivity increases, manufacturers will introduce dashboards that facilitate these types of services.

Even without automated driving, vehicle connectivity will play an important role in increasing road safety and traffic efficiency. Connectivity can inform the driver of road hazard warnings (breakdowns, road works, accidents, etc.), traffic light and speed limit violations, approaching emergency vehicles, and so on.

Intelligent navigation systems, which use geo-positioning and digital maps to guide drivers and increase route efficiency, are already in use. In the future, however, these units will be able to collect data from other cars, road authorities and other sources to calculate the best routes to take at any moment in time. The use of 5G networks, IoT and Big Data will allow these systems to become increasingly effective.

New services for an emerging industry

Connectivity, and the possibility of collecting data related to vehicles, will drive many new business models. “Pay as you drive” and “Mobility as a Service” (MaaS) are two of the new business models anticipated to be introduced into the market. Pay as you drive is already used by car rental companies and even insurance companies; however, with the use of 5G and IoT for the accurate collection of data, this service will surely grow drastically, as measuring the data will be much easier and quicker.

MaaS, the other business model, would allow drivers to find the best means of transport available at any moment in time, so that cars and public transport can be combined. Depending on traffic conditions, train or bus timetables, etc., drivers will be able to plan their route or journey efficiently. There is also a third business model that, when used by manufacturers, would save drivers not only time but also vehicle expenses: “predictive maintenance”. By collecting big data from sensors placed in a large number of vehicle parts, manufacturers could calculate when faults or breakdowns are most likely to happen, alerting vehicle owners directly on the vehicle dashboard or calling them in to have maintenance carried out.

In order to implement all the services mentioned above, the automotive industry and the telecommunications operators would have to work together to ensure, first of all, that 5G is always available along roadsides. For intelligent transport services to work correctly, 5G coverage needs to be widespread across the whole road network, especially in more rural areas, where cellular coverage has always tended to be less available. Additionally, roaming would play a very important role, in particular for professional transport and logistics companies. Apart from widespread 5G networks, telecommunications operators would also need to work closely with the automobile industry and traffic control authorities to ensure the prioritization of important services, such as road safety.

As with all the developments that have occurred within cellular technology, 5G networks will surely arrive for the automotive industry and other industrial sectors, introducing important changes to the way companies place their products on the market and the way clients and users benefit from them.

Application visibility and control needs

Why has visibility of the applications running over the network become such a critical point?

Firstly, the move of IT infrastructure to the cloud means our current understanding of layer 3 network traffic (IP) is insufficient to characterize the applications transmitting over the network: application servers had fixed, known IP addresses in traditional data centers, whereas IP addressing in the cloud is no longer controlled by the organization using these services.

Secondly, far more applications (both corporate and personal) are in circulation today than a few years ago. These applications have not, in general, been designed with bandwidth optimization in mind, and all have different needs and behaviors. This means some applications can (and do) adversely affect others if the network is incapable of applying policies to prevent it.

The vast majority of applications use HTTP and HTTPS for communication, mainly to evade, or minimize, possible negative effects arising from security policies or IP addressing (NAT) on the network. This means the transport layer (TCP or UDP port) is unable to adequately identify network applications, as they tend to use the same ports (80 for HTTP and 443 for HTTPS), as the sketch below illustrates.
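
A minimal sketch makes the point (illustrative names and rules only, not any product’s classifier): classifying by destination port collapses almost everything into “https”, whereas a DPI-style rule that reads, say, the TLS SNI can distinguish applications sharing port 443.

    PORT_MAP = {80: "http", 443: "https"}      # the port alone says very little

    def classify_by_port(dst_port: int) -> str:
        return PORT_MAP.get(dst_port, "unknown")

    def classify_by_sni(sni: str) -> str:
        # DPI-style rule: map the TLS server name to an application label
        if sni.endswith("salesforce.com"):
            return "salesforce"
        if sni.endswith("office365.com"):
            return "office365"
        return "https-other"

    print(classify_by_port(443))                    # "https", for thousands of apps
    print(classify_by_sni("login.salesforce.com"))  # "salesforce"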

To further aggravate the problem, companies must provide connectivity to an enormous array of ‘authorized’ local devices. Remote local networks today, unlike the traditional single terminal of yesterday, are more varied and far less controlled: wireless offices, guest access, home access, BYOD, IoT, etc. Consequently, the difficulties for traffic analysis, caching systems and CDNs also escalate.

Finally, this greater diversity increases security risks: viruses, malware, bots, etc. These, in turn, tend to generate “uncontrolled” network traffic that needs to be detected and characterized. At this point, the close link between visibility and security at the network level raises its head (with all its repercussions for analysis), a subject that we’ll tackle another day.

Conclusion

The above points make it very clear that analyzing network traffic has become more and more intricate over the last few years, boosting the need for new tools with greater capacity. Otherwise, we simply won’t know what is going through our network, not only placing it at risk but also unnecessarily increasing its upkeep. Given the tremendous amount of information handled, it is absolutely essential to use tools that can intelligently filter the information received and provide a high level of granularity in analysis and reports. It’s here that big data analysis technologies bring huge advantages compared to traditional tools.

Users, well aware of this growing difficulty, need application visibility and control solutions that meet these new needs:

  • Such solutions must be able to scale down to small and medium corporate offices, and offer a sound compromise between the CPU requirements (cost) needed for DPI (Deep Packet Inspection) and the number of detected applications (customer service and quality of application detection).
  • Integrating intelligent detection in remote routers, together with the use of a centralized management tool, versus current market solutions based on proprietary remote-point polling and (also proprietary) hardware appliances, allows for excellent detection granularity and affordable operation, scalable to any size of network.
  • Instead of opting for proprietary solutions, it’s crucial to use suppliers who adopt standard protocols to communicate visibility information (NetFlow/IPFIX, for example). This allows customers to use their own information collection methods if they so wish (see the sketch after this list).
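
As a hedged sketch of the last point, this is roughly the kind of per-flow record a router could export once DPI has labeled a flow. The field names are illustrative, loosely inspired by IPFIX information elements; this is not a wire-format encoder.

    from dataclasses import dataclass

    @dataclass
    class FlowRecord:
        src_addr: str
        dst_addr: str
        dst_port: int
        octets: int
        packets: int
        application_id: str    # result of DPI classification

    # Example record: a flow on port 443 identified by DPI, not by its port
    record = FlowRecord("10.0.0.12", "52.96.0.1", 443, 18240, 42, "office365")
    print(record)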

As part of its access routers and its management tool, Colibri Netmanager, Teldat offers visibility and control solutions for network applications capable of meeting the aforementioned market needs.

Smart Grids and reliability of communications

Smart Grids can be thought of as computer intelligence and networking abilities applied to a dumb electricity distribution system, with the aim of improving operations, maintenance and planning so that each component of the electric grid can both talk and listen. This set of operational features leads to automation, a key aspect of smart grid technologies.

But of course, in order to be able to talk about Smart Grids, reliability of communications must come first, providing the basic infrastructure that ensures the trustworthiness of the link.

Its importance becomes clear when there is an electrical overload and real-time monitoring of the grid is required. In these cases, it is crucial to be able to take immediate action on the network to avoid cascading failures in the electricity grid.

Smart-Grid Communications

Nowadays, a regular Smart-Grid deployment can include thousands of remote points, typically unattended and rather isolated. Since utilities can’t always use their own infrastructure, especially in areas where the deployment of their networks is limited or scarce, using third-party networks provided by carriers reduces the necessary investment.

The following points must be considered when deploying a Smart-Grid network:

  • Smart-Grid communications require advanced networking protocols such as VLANs, VRFs, QoS and policy routing to guarantee service isolation
  • Multi-carrier fall-back, in order to optimize service continuity
  • Advanced troubleshooting and management for easy deployments, especially under unknown conditions
  • In-house HW design for flexible product development and integration of the latest technologies
  • And of course, corporate security for critical applications, so that security threats are minimized

Although all these features contribute to ensuring communications, corporate security mechanisms are by far the most critical, due to three factors inherent to Smart Grids:

1.       The isolation of locations, which can also pose serious threats. In other words, how can we prevent access to the network at these unattended points? A single solution does not exist; it is necessary to employ a set of technologies and tools, including:

-          Device authentication with AAA using TACACS+

-          Systems for detection of physical access (e.g. door sensors, cabinet alarms, etc.)

-          Passwords for DMVPN based on serial number

-          Real time monitoring system

-          Destination packet filtering based on device MAC address (see the sketch after point 3)

2.       The existence of malware propagation, and the need to be fully protected against it. Common solutions among the largest electricity companies include dynamic per-session rules, traffic pattern detection and SCADA firewalls; protocol-based filtering and traffic pattern detection; or PAT firewalls and routing policies per traffic type.

3.       The importance of data integrity, achieved by using DMVPNs to interconnect remote locations and ease management; IPSec, with encryption (RC4, DES, 3DES and AES 256) and authentication (SHA-1 & 2) algorithms; and digital certificate standards such as X.509v3, LDAP, PKIX, PEM and DER.
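
As a minimal sketch of the MAC-filtering mechanism from point 1 (illustrative addresses only; real filtering would run in the router’s forwarding path):

    # Forward a frame only if it targets an authorized device at the site.
    AUTHORIZED_MACS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}  # example RTU/meter MACs

    def should_forward(dst_mac: str) -> bool:
        return dst_mac.lower() in AUTHORIZED_MACS

    print(should_forward("00:1A:2B:3C:4D:5E"))  # True: authorized destination
    print(should_forward("de:ad:be:ef:00:01"))  # False: frame dropped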

A different problem comes up when device failures occur; these require replacements and usually become a source of expense (in both money and time), mainly due to the distance that needs to be covered to reach the devices. If a power failure is disruptive, imagine one lasting a few hours or even days for a cause that could have been avoided using state-of-the-art technologies.

Hardware failures at remote locations can be triggered by the following circumstances:

-          Dust & temperatures. Because of their very nature, industrial devices are not allowed to use fans to keep temperatures below dangerous levels. At the same time, unattended locations can range from freezing temperatures in the winter to extreme heat in the summer. And there is also dust, which, by leaking into a standard, non-sealed device, could severely affect fan performance and circuitry. For these reasons, and in order to ensure operation under the most extreme circumstances, devices must use state-of-the-art technologies to endure these scenarios without breaking down or malfunctioning.

-          Electromagnetic discharges. The powerful electric currents that flow through a Smart Grid create EM fields that, at times, interfere with other devices such as switches. As a result, these can be activated at the wrong moment, causing unpredictable effects in the grid and affecting other electronic devices in the surroundings. This, in turn, can lead to a series of internal voltaic arcs that, in cascade fashion, can literally burn down the devices inside the grid unless they are able to cope with potential differences on the order of kilovolts.

-          Power supply. The power supply is not always as stable as one would like it to be. This is particularly true at substations and transformation centers, where sharp variations in energy may occur. There can be grounding differences too, which are fairly frequent in low and medium voltage substations. Compliance with demanding standards and the use of special multirange power supply units that endure these high-voltage peaks become a necessity.

At Teldat, our continuous and absolute commitment to RTD has allowed us to overcome this complex grid of challenges, working alongside the largest electricity companies, understanding their needs and incorporating them into our Regesta router family.