Edgewater Fullscope Poised to Showcase Unique Industry Solutions that Drive Growth at Microsoft Ignite

Athens, AL, Sept. 07, 2017 (GLOBE NEWSWIRE) — Edgewater Fullscope Poised to Showcase Unique Industry Solutions that Drive Growth at Microsoft Ignite, Also Have Presence at Microsoft Envision

Edgewater Fullscope, a leading provider of Microsoft Dynamics 365 (formerly Dynamics AX and CRM) as well as BI and consulting services, will showcase unique industry solutions at Microsoft’s Ignite conference and have a presence at Ignite’s connected event, the Microsoft Envision conference. Microsoft Ignite will take place from September 25-29 at the Orange County Convention Center in Orlando, FL. Microsoft Envision will take place across the street at the Hilton Orlando from September 25-27.

Edgewater Fullscope will be in booth 2060 at the sold-out Ignite conference. Attendees can stop by to learn how Fullscope delivers successful digital transformations that drive growth with Microsoft Dynamics 365. Visitors can also see one-on-one demos of ERP, CRM and BI solutions highlighting specific benefits for manufacturing companies. The company will also offer advisory and technical services, leveraging Office 365, SharePoint, Azure, .NET and the rest of the Microsoft stack to provide innovative solutions that create business value. Fullscope’s industry experts will be on hand to discuss how Microsoft applications and the Internet of Things can be harnessed for business growth, strategy and efficient operations.

The Microsoft Ignite conference offers IT professionals the opportunity to connect with peers, explore new technology and get questions answered, as well as access over a thousand hours of content like training sessions, deep dives on products and live demos.

Microsoft Envision is a thought leadership conference where business leaders will gain strategic insights to help them engage customers, empower employees, optimize operations and transform their products in new and impactful ways through the power of today’s technology. Join Edgewater Fullscope at Envision to develop your strategic roadmap and explore what is possible through digital transformation. With your Microsoft Envision registration, access to Microsoft Ignite is included.

Go to Source

What Does Oracle’s Embrace of CNCF Mean for Developers?

Fueled by the Docker phenomenon, Linux containers have gone mainstream as the easiest way to deploy software applications, packaged as bite-size services, in the cloud. But developers now face new questions, like how to orchestrate all these containerized applications, how to manage containers across multiple clouds, and whether serverless computing will make all this obsolete.

The good news for developers is how much energy is directed at addressing all these questions. The latest example comes from Oracle this week joining the Linux Foundation’s Cloud Native Computing Foundation, which focuses on driving adoption of container-packaged, microservices-oriented computing at enterprise scale. The Kubernetes container orchestration tool is a star technology in the CNCF. In addition to joining CNCF as a platinum member, Oracle has released Kubernetes on Oracle Linux, dedicated engineers to the Kubernetes project, and open sourced several tools including Smith, CrashCart, and Terraform Installer for Kubernetes on Oracle Cloud Infrastructure.

These moves bode well for cloud-native developers who want to avoid vendor lock-in while moving workloads to the cloud. “If Kubernetes provides a way to select the cloud you use, you gain maximum flexibility to choose the best environment,” says Bob Quillin, vice president of developer relations for the Oracle Container Native Group. “More and more of our customers have a multicloud approach.”

Too Many Microservices

The challenge of a microservices approach is that there are many more services to manage, and those services are ephemeral as they scale up and scale down. “Without automation, orchestration, and a built-in administration layer using tools like Kubernetes, you cannot take this to the enterprise level,” says Quillin.
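
As a rough illustration of the kind of automation Quillin describes, the sketch below uses the official Kubernetes Python client to scale a containerized service up or down programmatically. The deployment name and namespace are hypothetical examples, not anything from Oracle or the CNCF.

```python
# Minimal sketch: scaling a containerized microservice with the official
# Kubernetes Python client (pip install kubernetes). The deployment name
# and namespace are hypothetical examples, not from the article.
from kubernetes import client, config


def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch a Deployment's replica count and let Kubernetes start or
    tear down the ephemeral service instances to match."""
    config.load_kube_config()  # reads credentials from ~/.kube/config
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


if __name__ == "__main__":
    # Scale the hypothetical orders-service up to five instances.
    scale_deployment("orders-service", "default", replicas=5)
```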

With Kubernetes as the anchor, Quillin predicts that the CNCF’s projects will prove useful for cloud-native apps, DevOps-style continuous delivery, and hybrid workloads.

“CNCF was formed around the concept of an open, cloud-neutral, standards-based approach. That’s the key reason we’ve joined; it’s the hub for open source components that already have success in the field,” he says. “It supports those projects but also lets them run independently.”

Quillin knows this fast-morphing cloudscape well. In 2014, he founded StackEngine, a container technology company that was acquired by Oracle in 2015. His Austin-based team continued its efforts and, in 2016, launched Oracle Container Cloud Service. Oracle and CNCF share the goal of supporting Kubernetes and related tools for containerized applications at enterprise scale, across multiple clouds.

“With Oracle’s commitment to enterprise-grade networking and security, and bare metal performance being a core competency, supporting Kubernetes and its enterprise cloud adoption is very important to our customers,” says Quillin.

Will CNCF Make the OpenStack Mistake?

But can the CNCF avoid the fate of OpenStack, an earlier effort to create open source private and public clouds that had mixed results and poor adoption? CNCF’s approach is shaped by the community, he says, whereas OpenStack’s was defined more by vendors: “They had a complex stack of interconnected components that turned out to be very difficult to roll out in an enterprise.”

Several engineering teams at Oracle are dedicated to the Kubernetes effort, particularly in security, networking, and federation. TJ Fontaine, former Node.js project lead, is now Oracle’s lead contributor to Kubernetes. Jon Mittelhauser, Oracle vice president of container native engineering, joins CNCF’s governing board.

Quillin notes that there’s more to CNCF than Kubernetes, and that Oracle is using several promising tools under the CNCF umbrella, including Prometheus for monitoring, OpenTracing for instrumenting distributed code, gRPC for remote procedure calls, and OCI, the Open Container Initiative. “Kubernetes is the poster child for CNCF. But in order to operationalize containers, you also need to trace and monitor and debug and control,” he says.
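
To give a flavour of how one of those projects is used in practice, the sketch below instruments a toy service with the official Prometheus Python client so that request counts and latencies can be scraped and monitored. The metric names and port are illustrative assumptions.

```python
# Minimal sketch of Prometheus-style monitoring with the official Python
# client (pip install prometheus_client). Metric names and the port are
# illustrative assumptions, not anything from the article.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("demo_requests_total", "Total requests handled")
LATENCY = Histogram("demo_request_seconds", "Request latency in seconds")


@LATENCY.time()
def handle_request() -> None:
    """Simulate a unit of work while recording a count and a latency sample."""
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))


if __name__ == "__main__":
    start_http_server(8000)  # exposes metrics at http://localhost:8000/metrics
    while True:
        handle_request()
```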

Complementary to Serverless, DevOps

But what will the serverless trend mean for Kubernetes, and containers more broadly? Serverless and containers are complementary, not mutually exclusive, Quillin explains, and CNCF is likely to address how to make serverless work across clouds as well.

“Serverless means you begin to think more like a developer, with infrastructure taken care of for you. Amazon’s Lambda is notable, but that’s a closed solution that runs only on AWS,” Quillin says. “It’s not viable until it can be used cross-cloud or on premises. We are excited to work with the industry to develop an open, cloud-neutral serverless technology, and CNCF is likely to be a leader in that effort.”
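
For readers unfamiliar with the model, a serverless function is usually nothing more than a handler the platform invokes on demand, with provisioning and scaling handled for you. The minimal sketch below shows the shape of such a handler in the AWS Lambda style; the event payload is a made-up example.

```python
# Minimal sketch of a serverless function in the AWS Lambda style: the
# platform provisions and scales the runtime, the developer supplies only
# a handler. The event payload here is a made-up example.
import json


def lambda_handler(event, context):
    """Entry point the platform invokes once per event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


if __name__ == "__main__":
    # Local smoke test; in production the cloud platform supplies the event.
    print(lambda_handler({"name": "developer"}, None))
```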

As more and more tools emerge for born-in-the-cloud apps, developers are embracing the microservices pattern and applying it to chunks of existing monoliths. This architecture pattern is predicated on continuous integration and delivery, which are tenets of DevOps. Today, Quillin says, tools like Kubernetes and Docker are leading the culture.

“Container technology is the killer app for DevOps because it’s intended to connect developers to production,” says Quillin. “A container is the best artifact to move from a developer’s laptop to QA to staging to production, so it’s a true enabler.”

For full article, click here

Go to Source

In-memory – A New Approach to ERP

In-memory computing is one of the most talked about technologies right now. But how the software works, how it can benefit enterprises and their processes is a completely different story – one that needs to be told.

At the basic level, in-memory computing replaces slower disc-based data tables and instead uses the random-access memory (RAM) of a computer or a cluster of computing resources in the cloud, offering significant speed and cost benefits.

Combining ERP software with in-memory will preserve the traditional database traits of atomicity, consistency, isolation and durability (ACID) that are there to guarantee transaction integrity. Unlike pure in-memory applications, ERP with in-memory may include a hybrid approach, with both an in-memory and disc-based database. This helps maintain RAM reserves by allowing an application to decide which parts of transactional data are disc-based and which should be in-memory.
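
As a rough sketch of that hybrid idea, the example below uses Python’s built-in sqlite3 module to keep a hot transactional table in RAM while attaching a disc-based file for colder data. The table names and file path are hypothetical, and a real ERP platform would manage this split internally.

```python
# Rough sketch of a hybrid in-memory / disc-based layout using Python's
# built-in sqlite3 module. Table names and the file path are hypothetical;
# a real ERP platform manages this split internally.
import sqlite3

conn = sqlite3.connect(":memory:")  # hot, RAM-resident database
conn.execute("ATTACH DATABASE 'cold_data.db' AS disc")  # disc-based store

# Frequently queried transactional data lives in memory ...
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("INSERT INTO orders VALUES (1, 'ACME', 1250.0)")

# ... while bulky, rarely queried material stays on disc.
conn.execute("CREATE TABLE IF NOT EXISTS disc.attachments (order_id INTEGER, blob_path TEXT)")
conn.execute("INSERT INTO disc.attachments VALUES (1, '/archive/drawings/1.pdf')")
conn.commit()

# Ad-hoc queries can still join across both tiers.
rows = conn.execute(
    "SELECT o.customer, a.blob_path "
    "FROM orders o JOIN disc.attachments a ON o.id = a.order_id"
).fetchall()
print(rows)  # [('ACME', '/archive/drawings/1.pdf')]
```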

When choosing to adopt in-memory as part of your ERP strategy, there are three main questions you need to ask first.

1. What are the drivers for in-memory adoption?

The incentives that drive a company to adopt in-memory computing are straightforward. Some large enterprises may be harnessing big data from social media and other online sources and harvesting insights from an in-memory data set. But for many industrial companies, the most compelling case for in-memory technology may stem from the need of senior managers to view aggregated enterprise data in real-time.

In-memory computing can also be a way for an ERP vendor to address underlying issues in an application’s architecture. If the original enterprise software architecture was too complex, the application may have to look in more than a dozen locations in a relational database to satisfy a single query. The vendor may be able to simplify this convoluted model and speed up queries by moving entirely from disc-based data storage to in-memory.

But an IT department may find that running the entirety of an application in-memory is not economically attractive. While the cost of RAM or flash memory has been falling, a 1TB RAM cluster still costs as much as $20,000 to $40,000. For scalability and cost reasons, it may be wise for businesses to be selective about which portions of the database they run in-memory. Moreover, ERP applications that run entirely in-memory tend to force end user companies to staff up with specialists in this very specific technology.

2. How will it optimize the speed of queries and reports? 

The main benefit is enhanced processing speed. Data stored in-memory can be accessed hundreds of times faster than would be the case on a hard disc – which is important for businesses dealing with larger data sets and non-indexed tables that need to be accessed immediately.

Within ERP, this speed is particularly useful when companies are running ad-hoc queries, say, to identify customer orders that conform to specific criteria or determine which customer projects consume a common part. Enterprise software run with traditional disc-based storage is likely to bog down if the database running business transactions in real-time is also responding to regular queries from the business intelligence systems.

But an in-memory application should be, in a manner of speaking, a hybrid solution spanning RAM and disc-based storage. In theory, a pure in-memory computing system requires no disc space. In practice this is impractical, since modern enterprise applications store both structured and unstructured data such as photos, technical drawings, video and other materials that are not used for analytical purposes but would consume a great deal of memory. The benefit of moving imagery – for example, photos an electric utility engineer may take of meters – in-memory would be minimal and the cost high. This data is not queried, does not drive visualizations or business intelligence and would consume substantial memory resources.

A hybrid model containing both a traditional and in-memory database working in sync enables the end user to keep all or part of the database in-memory, so that columns and tables that are frequently queried by business analytics tools or referenced in ad hoc queries can be accessed almost instantly. Meanwhile, data that doesn’t need to be accessed as frequently is stored in a physical disc, enabling businesses to get real-time access to important information while making the most of their current IT systems.

3. Where should I be deploying in-memory computing?

The cost of RAM is one reason that it may be more desirable to simply use in-memory to speed up processing in specific parts of the database that are frequently queried. This delivers the greatest benefit with minimal cost for additional RAM. Rather than keeping an entire application database in-memory, most companies may prefer to rely on a database kept in traditional servers or server clusters on-premise or in the cloud, keeping only highly trafficked data in-memory.
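
One way to frame that selectivity is as a simple budget problem: rank tables by how often they are queried and pin only the hottest ones in RAM. The toy sketch below illustrates the idea; the table names, sizes and query log are made up for illustration.

```python
# Toy heuristic for choosing which tables to pin in memory under a RAM
# budget: rank tables by query frequency and keep the hottest ones that
# fit. Table names, sizes and the query log are made-up examples.
from collections import Counter

TABLE_SIZE_GB = {"orders": 40, "invoices": 25, "attachments": 600, "audit_log": 300}
QUERY_LOG = ["orders", "orders", "invoices", "orders", "audit_log", "invoices"]


def pick_in_memory_tables(budget_gb: float) -> list:
    """Return the most frequently queried tables that fit the RAM budget."""
    chosen, used = [], 0.0
    for table, _count in Counter(QUERY_LOG).most_common():
        size = TABLE_SIZE_GB[table]
        if used + size <= budget_gb:
            chosen.append(table)
            used += size
    return chosen


if __name__ == "__main__":
    print(pick_in_memory_tables(budget_gb=100))  # -> ['orders', 'invoices']
```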

Determining which sections or how much of an ERP database should be run in-memory will depend on the use case, but there are three main areas in-memory computing can help optimize:

Analysis of Large Data Sets

Real-time streaming of data, whether it is actual big data that resides outside a transaction system or data from within your ERP, requires tremendous computing resources. Information routed through a traditional data warehouse will be old and less useful, but continuous queries against the transactional database could lead to performance issues. Even traditional business intelligence processes in industries that can benefit from real-time or predictive analytics require real-time streaming data rather than periodic updates, making in-memory an attractive option.

Click here for full story.

Go to Source

How chip design is evolving in response to IoT development

As the IoT continues to grow, chip designers will quickly find themselves becoming the most valued part of a billion-dollar industry.

The rapid development of the so-called Internet of Things (IoT) has pushed many old industries to the brink, forcing most companies to fundamentally reevaluate how they do business. Few have felt the reverberations of the IoT more than the microchip industry, one of the vital drivers of the IoT that has both enabled it and evolved alongside it.

So how exactly is chip design evolving to keep up with the IoT’s breakneck proliferation? A quick glance at the inner workings of the industry that enables all of our beloved digital devices to work shows just how innovative it must be to keep up with today’s ever-evolving world.

Building a silicon brain

Developing microchips, which are so diverse that they’re used to power coffee makers and fighter jets alike, is no simple task. In order to meet the massive processing demands of today’s digital gadgets, chip design has been forced to take some tips from the most efficient computer known to man: the human brain.

Today’s interconnected world requires more complex chips than those of the past. Unlike the do-it-all chips of yesteryear, today’s chips are often purpose-built for a specific task. By designing specialized chips, chipmakers are better equipped to meet the unique demands of today’s tech giants, who may require a special chip specifically designed for their own brand of autonomous cars or drones, to name but a few products.

Stream and Aaeon collaborate for Industrial IoT solutions

  • Integration of the two companies’ LPWA LoRa solutions

  • Features Stream’s IoT-X connectivity management platform

  • Goal is to further the adoption of Industrial IoT

  • Follows a similar partnership with Kerlink

Aaeon Technology Europe and Stream Technologies have announced that they have integrated their LPWA LoRa solutions to enable more cost-effective and scalable low-power IoT network deployments. The two companies are no strangers: they already have an existing partnership that includes an integration solution between Aaeon’s hardware and Stream’s cellular connectivity services, which has been deployed globally across multiple verticals, including smart vending and industrial automation.

With the launch of Aaeon’s LoRa gateway, the two companies’ customers will now be able to leverage Stream’s IoT-X connectivity management platform to simplify and scale their IoT deployments. IoT-X is fully integrated with Stream’s private APN for global cellular connectivity, its LoRaWAN network server for network deployments, and its data infrastructure for routing data from IoT devices to third-party applications.

“To enable the adoption of Industrial IoT (IIoT) it is fundamental to offer customers solutions that make the transition from legacy applications easier,” said Marco Barbato, Product Director at Aaeon Europe. “Professionally managed connectivity is crucial, since it covers the transfer of the data and its security. LoRa is one of the leading technologies of IIoT, and partnering with Stream allows us to deliver a high-level integrated solution with our LoRa gateway and network server to our industrial customers.”

Aaeon is a manufacturer of advanced industrial and embedded computing platforms for IoT and Industrial Internet applications, and is a member of the LoRa Alliance.

“Aaeon is demonstrating a strong commitment to simplify the IoT for customers worldwide by adding LoRa to their existing technology stack,” said Mohsen Shakoor, Strategic Partnerships at Stream. “Customers are reducing their network deployment risks by partnering with Stream and AAEON, as we have a wealth of experience in IoT connectivity and IoT connectivity hardware respectively. Together with AAEON, we will be delivering low-cost, scalable and secure LoRa network deployments.”

Partnerships and ecosystems

The partnership is the latest in a recent series from Stream. A few weeks ago, the UK-based company announced a collaboration with IoT gateway company Kerlink to integrate their respective solutions.

For full story, please click here.

Go to Source

How can IoT help your business grow?

Have you considered how the Internet of Things (IoT) can help your business?

Have you adopted any IoT solutions?

Have they changed your ability to save costs, increase efficiencies and reinvest in growth?

Dr. Pantea Lotfian (Managing Director of Camrosh) writes:

Camrosh Consulting invites you to take part in our Internet of Things Survey 2017.

Filling in the questionnaire does not require any technical knowledge. You can find the survey questionnaire directly here: https://survey.zohopublic.eu/zs/gACCDE

We intend to publish the results by mid-November 2017; we will share the outcomes on the survey website (https://digitalsurvey.tech) and disseminate them via social media and through umbrella organisations.

The survey will be evaluated anonymously and your information will not be shared with any third parties.

We thank you in advance for your participation and are looking forward to receiving your replies to help create the momentum and the community for business growth and success in the UK.

Why do we want to know?

The Internet of Things has been a buzzword for a while now and is still buzzing strongly, mainly due to its ever-increasing impact on consumer products and services and on business models in virtually every industry.

The IoT is not one technology, but rather a system built of various technologies, such as sensors, network connectivity, and data analytics, to name a few. Advances in each of these areas have, over the past five years, dramatically increased the power of the IoT to impact businesses in many different ways, with improved operational efficiency and cost reduction currently the benefits most often cited by solution providers.

However, the IoT also opens up other wide-reaching opportunities for businesses, particularly for SMEs with limited resources, when they combine a deep understanding of their business and markets with strategic forward planning to adopt IoT solutions that enable growth.

Successful adoption of IoT solutions by SMEs

  1. Nayland Hotel: a family-owned leisure provider.
    By adopting IoT-enabled energy solutions, they reduced water and energy costs by 80% and 50% respectively.

For full story, please click here.

Go to Source