In-memory computing is one of the most talked-about technologies right now. But how the software works, and how it can benefit enterprises and their processes, is a completely different story – one that needs to be told.
At the basic level, in-memory computing replaces slower disc-based data tables with the random-access memory (RAM) of a computer or a cluster of computing resources in the cloud, offering significant gains in speed and cost.
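To illustrate the difference, Python's sqlite3 module can hold a database entirely in RAM rather than in a file on disc. This is a minimal sketch of the concept, not any particular ERP vendor's technology, and the table is invented for illustration:

```python
import sqlite3

# A disc-based database would be opened as sqlite3.connect("erp.db");
# ":memory:" gives the same relational interface with tables held in RAM.
ram_db = sqlite3.connect(":memory:")
ram_db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
ram_db.executemany("INSERT INTO orders (amount) VALUES (?)",
                   [(10.0,), (25.5,), (4.5,)])

# Queries run against RAM, with no disc I/O on the read path.
total = ram_db.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(total)  # 40.0
```

The same SQL works against either backend, which is why hybrid deployments can move individual tables between disc and RAM without rewriting application code.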
Combining ERP software with in-memory computing preserves the traditional database traits of atomicity, consistency, isolation and durability (ACID) that guarantee transaction integrity. Unlike pure in-memory applications, ERP with in-memory may take a hybrid approach, with both an in-memory and a disc-based database. This helps conserve RAM by allowing the application to decide which parts of transactional data stay disc-based and which should be in-memory.
When choosing to adopt in-memory as part of your ERP strategy, there are three main questions you need to ask first.
The incentives that drive a company to adopt in-memory computing are straightforward. Some large enterprises may be harnessing big data from social media and other online sources and harvesting insights from an in-memory data set. But for many industrial companies, the most compelling case for in-memory technology may stem from the need of senior managers to view aggregated enterprise data in real-time.
In-memory computing can also be a way for an ERP vendor to address underlying issues in an application’s architecture. If the original enterprise software architecture was too complex, the application may have to look in more than a dozen locations in a relational database to satisfy a single query. The vendor may be able to simplify this convoluted model and speed up queries by moving entirely from disc-based data storage to in-memory.
But an IT department may find that running the entirety of an application in-memory is not economically attractive. While the cost of RAM or flash memory has been falling, a 1TB RAM cluster still costs as much as $20,000 to $40,000. For scalability and cost reasons, it may be wise for businesses to be selective about which portions of the database they run in-memory. Moreover, ERP applications that run entirely in-memory tend to force end-user companies to staff up with expertise in this very specific technology.
The main benefit is enhanced processing speed. Data stored in-memory can be accessed hundreds of times faster than would be the case on a hard disc – which is important for businesses dealing with larger data sets and non-indexed tables that need to be accessed immediately.
Within ERP, this speed is particularly useful when companies are running ad-hoc queries, say, to identify customer orders that conform to specific criteria or determine which customer projects consume a common part. Enterprise software run with traditional disc-based storage is likely to bog down if the database running business transactions in real-time is also responding to regular queries from the business intelligence systems.
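One pattern that follows from this is snapshotting the transactional tables into a separate in-memory replica, so that ad-hoc BI queries never contend with the live transaction load. A rough sketch with Python's sqlite3 follows; the table, data and query are invented for illustration, and the "transactional" database is also in-memory here only to keep the sketch self-contained:

```python
import sqlite3

# Transactional database (on disc in production; ":memory:" here so the
# sketch is self-contained).
oltp = sqlite3.connect(":memory:")
oltp.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, part TEXT, qty INTEGER)")
oltp.executemany("INSERT INTO orders (part, qty) VALUES (?, ?)",
                 [("valve", 4), ("valve", 2), ("pump", 1)])
oltp.commit()

# Snapshot the transactional data into an in-memory replica, so ad-hoc
# BI queries run against RAM instead of the live transactional store.
replica = sqlite3.connect(":memory:")
oltp.backup(replica)

# Ad-hoc query: which customer orders consume a common part?
rows = replica.execute(
    "SELECT part, COUNT(*), SUM(qty) FROM orders GROUP BY part ORDER BY part"
).fetchall()
print(rows)  # [('pump', 1, 1), ('valve', 2, 6)]
```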
But an in-memory application should be, in a manner of speaking, a hybrid of RAM and disc-based storage. In theory, a pure in-memory computing system requires no disc space. In practice this is impractical, since modern enterprise applications store both structured and unstructured data – photos, technical drawings, video and other materials that are not used for analytical purposes but would consume a great deal of memory. The benefit of moving imagery – for example, photos an electric utility engineer may take of meters – in-memory would be minimal and the cost high. This data is not queried, does not drive visualizations or business intelligence, and would consume substantial memory resources.
A hybrid model containing both a traditional and in-memory database working in sync enables the end user to keep all or part of the database in-memory, so that columns and tables that are frequently queried by business analytics tools or referenced in ad hoc queries can be accessed almost instantly. Meanwhile, data that doesn’t need to be accessed as frequently is stored in a physical disc, enabling businesses to get real-time access to important information while making the most of their current IT systems.
The cost of RAM is one reason that it may be more desirable to simply use in-memory to speed up processing in specific parts of the database that are frequently queried. This delivers the greatest benefit with minimal cost for additional RAM. Rather than keeping an entire application database in-memory, most companies may prefer to rely on a database kept in traditional servers or server clusters on-premise or in the cloud, keeping only highly-trafficked data in-memory.
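This selective approach can be sketched as a read-through cache: a small amount of RAM sits in front of the disc-based system of record and holds only highly-trafficked rows. The table and helper names below are hypothetical, and the backing store is in-memory only to keep the sketch self-contained:

```python
import sqlite3

# Disc-based system of record (":memory:" here so the sketch runs
# standalone; in practice this would be a file or server database).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE parts (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO parts VALUES (?, ?)", [(1, "valve"), (2, "pump")])

hot_cache = {}  # highly-trafficked rows, kept in RAM

def get_part(part_id):
    """Read-through lookup: serve hot rows from RAM, fall back to disc."""
    if part_id not in hot_cache:
        row = db.execute("SELECT name FROM parts WHERE id = ?",
                         (part_id,)).fetchone()
        hot_cache[part_id] = row[0] if row else None
    return hot_cache[part_id]

print(get_part(1))  # first access reads the backing store
print(get_part(1))  # repeat access is served from RAM
```

Only the rows that are actually queried migrate into RAM, which is the economic trade-off the paragraph above describes: maximum speed-up for the hot data, minimal spend on additional memory.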
Determining which sections or how much of an ERP database should be run in-memory will depend on the use case, but there are three main areas in-memory computing can help optimize:
Real-time streaming of data, whether it is actual big data that resides outside a transaction system or data from within your ERP, requires tremendous computing resources. Held in a traditional data warehouse, this information will be old and less useful by the time it is queried, but continuous queries against the transactional database could lead to performance issues. Even traditional business intelligence processes in industries that can benefit from real-time or predictive analytics require real-time streaming data rather than periodic updates, making in-memory an attractive option.
The rapid development of the so-called Internet of Things (IoT) has pushed many old industries to the brink, forcing most companies to fundamentally reevaluate how they do business. Few have felt the reverberations of the IoT more than the microchip industry, one of the vital drivers of the IoT that has both enabled it and evolved alongside of it.
So how exactly is chip design evolving to keep up with the IoT’s breakneck proliferation? A quick glance at the inner workings of the industry that enables all of our beloved digital devices to work shows just how innovative it must be to keep up with today’s ever-evolving world.
Developing microchips, which are so diverse that they’re used to power coffee makers and fighter jets alike, is no simple task. In order to meet the massive processing demands of today’s digital gadgets, chip design has been forced to take some tips from the most efficient computer known to man: the human brain.
Today’s interconnected world requires more complex chips than those of the past. Unlike the do-it-all chips of yesteryear, today’s chips are often purpose-built for a specific task. By designing specialized chips, these companies are better equipped to meet the unique demands of today’s tech giants, who may require a special chip specifically designed for their own brand of autonomous cars or drones, to name but a few products.
Silicon Valley’s ability to churn out cheaper, faster chips capable of meeting the demands of a 21st-century economy has been vital to the growth of the IoT, which has in turn channelled money back into chip development. Today’s internet users, many of them heavy consumers of social media services, largely have the IoT to thank for incentivizing the chip industry to diversify so much; global semiconductor sales now top a staggering $335 billion a year, a clear display of how many different varieties of chips are needed.
As the IoT continues to drive investment in areas like cloud computing, sensors, and interactivity, an even more diverse array of chips will be needed to power tomorrow’s unique devices. The semiconductor industry is already pivoting towards new business goals; rather than focusing on processing power as in the past, tomorrow’s chips will have a heavier focus on miniaturization, software compatibility, and better security.
Aaeon Technology Europe and Stream Technologies have announced that they have integrated their LPWA LoRa solutions to enable more cost-effective and scalable low-power IoT network deployments. The two companies are no strangers: they already have an existing partnership, including an integration between Aaeon’s hardware and Stream’s cellular connectivity services, which has been deployed globally across multiple verticals including smart vending and industrial automation.
With the launch of Aaeon’s LoRa gateway, the two companies’ customers will now be able to leverage Stream’s IoT-X connectivity management platform to simplify and scale their IoT deployments. IoT-X is fully integrated with Stream’s private APN for global cellular connectivity, its LoRaWAN network server for network deployments, and its data infrastructure for routing data from IoT devices to third-party applications.
“To enable the adoption of Industrial IoT (IIoT) it is fundamental to offer customers solutions that make the transition from legacy applications easier,” said Marco Barbato, Product Director at Aaeon Europe. “Professionally managed connectivity is crucial, since it covers the transfer of the data and its security. LoRa is one of the leading technologies of IIoT, and partnering with Stream allows us to deliver a highly integrated solution with our LoRa gateway and network server to our industrial customers.”
Aaeon is a manufacturer of advanced industrial and embedded computing platforms for the IoT and Industrial Internet applications, and is a member of the LoRa Alliance.
“Aaeon is demonstrating a strong commitment to simplify the IoT for customers worldwide by adding LoRa to their existing technology stack,” said Mohsen Shakoor, Strategic Partnerships at Stream. “Customers are reducing their network deployment risks by partnering with Stream and AAEON, as we have a wealth of experience in IoT connectivity and IoT connectivity hardware respectively. Together with AAEON, we will be delivering low-cost, scalable and secure LoRa network deployments.”
The partnership is the latest in a recent series from Stream. A few weeks ago, the UK-based company announced a collaboration with IoT gateway company Kerlink to integrate their respective solutions.
Dr. Pantea Lotfian (Managing Director of Camrosh) writes:
Filling in the questionnaire does not require any technical knowledge. You can find the survey questionnaire directly here: https://survey.zohopublic.eu/zs/gACCDE
We intend to publish the results by mid-November 2017; we will share the outcomes on the survey website (https://digitalsurvey.tech) and disseminate them via various social media channels and through umbrella organisations.
The survey will be evaluated anonymously and your information will not be shared with any third parties.
We thank you in advance for your participation and are looking forward to receiving your replies to help create the momentum and the community for business growth and success in the UK.
Why do we want to know?
The Internet of Things has been a buzzword for a while now and is still buzzing strongly, mainly due to its ever-increasing impact on consumer products and services and on business models in virtually every industry.
The IoT is not one technology, but rather a system built of various technologies, such as sensors, network connectivity, and data analytics to name a few. Advances in each of these areas have over the past five years dramatically increased the power of the IoT to impact businesses in many different ways, with improved operational efficiencies and cost reductions currently being the main cited benefits by solution providers.
However, the IoT also opens up other wide-reaching opportunities for businesses – particularly for SMEs with limited resources, when they combine a deep understanding of their business and markets with strategic forward planning to adopt IoT solutions that enable growth.
Successful adoption of IoT solutions by SMEs
Telecoms company Onecom has appointed experienced telecoms specialist Graham Doe as head of Internet of Things as it focuses on smart connected services for businesses as a major growth area.
The newly-created post comes after Onecom signed a major five-year deal with Vodafone to develop, launch and manage IoT services nationwide.
Doe has more than 10 years of sales and management experience in the IoT space and has worked for some of the UK’s largest telecoms companies, including Telefonica O2 UK and Arqiva.
He has considerable knowledge and experience in existing and emerging IoT connectivity and hardware solutions across a range of sectors including energy and utilities, telecoms, healthcare, transport and security.
“Innovation in IoT services is a major focus for Onecom, and its importance is reflected in the quality of this appointment,” said Aaron Brown, chief operating officer at Onecom.
“Graham brings with him a wealth of experience in telecoms and particularly in IoT, along with an instinctive understanding of how connected hardware is going to transform UK businesses over the coming year.
“His expertise and dedication to outstanding customer experience make him a perfect fit for Onecom.”
Onecom, which has headquarters in Hampshire and offices around the UK and Northern Ireland, is the UK’s largest independent business telecommunications provider, operating from 12 regional offices, including London, Cardiff, Southampton, Plymouth, Leeds, Telford, Norwich and Brighton.
Doe said: “Onecom has an exciting vision for the use of IoT technology and understands the virtually unlimited possibilities that it brings for businesses.
“I am excited to take on this new role and am looking forward to driving Onecom’s dominance in the IoT market, achieving ambitious sales growth targets and leading an expanding sales and marketing team.”