UK and France join forces to speed up AI development

THE UK AND FRANCE have teamed up in the name of artificial intelligence (AI) and cybersecurity in a bid to boost future developments in these areas.

Ministers from the two countries’ governments made the decision to join forces on Thursday in the hope that the arrangement will foster cross-Channel collaboration between academics, industry and government and thus “help both countries seize the economic and social benefits of fast-developing tech such as AI”.

The UK’s Digital, Culture, Media and Sport Secretary, Matt Hancock, pioneered the initiative and, to get the ball rolling on the deal, met his French counterpart, Françoise Nyssen, at the UK-France Summit. The summit was hosted by the prime minister and the French president, Emmanuel Macron, at the Royal Military Academy in Sandhurst.

Hancock said the two countries will establish a “cutting-edge digital conference” as part of the new pact, which will take place later this year and “see our world-leading experts in cybersecurity, digital skills, artificial intelligence, data and digital government share their talent and knowledge”.

He added: “The UK and France are strengthening ties in technology and innovation. Both countries benefit when our digital economies are strong and the event will deepen our bonds and foster cross-Channel collaboration between those at the forefront of modern technology.”

The new deal between the two countries is likely to bring very different but valuable talents to the table. For instance, the UK tops the list in Europe for global tech investors, with its tech firms attracting more venture capital funding than any other European country in 2017. And in December it was named by Oxford Insights as the best-prepared country in the world for AI implementation.

As for France, the country has made big strides lately in creating new tech businesses and encouraging entrepreneurs, with Paris’s newly built Station F, a former railway station hosting startups, multinationals and investors, symbolising the country’s ambition.

Read More Here

Article Credit: The Inquirer

Go to Source

AI That Can Predict Death Has Been Given FDA Approval

Using the power of artificial intelligence (AI), doctors could predict when their patients might be knocking on death’s door. This isn’t merely some grim dystopian prop; the researchers hope it could be used to slash the surprisingly high number of unexpected deaths in the US.

Excel Medical, a medical tech company in Florida, has recently been boasting about its new WAVE Clinical Platform, an algorithm that can accurately predict whether medical patients could be at risk of a sudden, unexpected death.

It consists of an integrated system of hospital workstations and digital medical records that includes real-time data on patients’ physiology, past medical history, family history, medications, age and more. With all this information, the AI can automatically calculate the risk of patient deterioration up to six hours before a doctor might notice. If it detects anything suspicious, the system can alert an on-call doctor through a smartphone app.
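To make the idea concrete, here is a purely illustrative, much-simplified sketch of this kind of early-warning logic, loosely in the style of a rules-based clinical score. The thresholds, field names and alert rule below are invented for illustration; they are not Excel Medical’s actual WAVE algorithm, which is proprietary and far more sophisticated.

```python
# Toy early-warning score: combines real-time vitals into a single
# deterioration risk and flags when an on-call doctor should be alerted.
# Thresholds and field names are hypothetical, for illustration only.

def risk_score(vitals):
    """Score each vital sign 0-3 and sum the results."""
    score = 0
    hr = vitals["heart_rate"]
    if hr > 130 or hr < 40:
        score += 3
    elif hr > 110 or hr < 50:
        score += 2
    rr = vitals["resp_rate"]
    if rr > 30 or rr < 8:
        score += 3
    elif rr > 24:
        score += 2
    sbp = vitals["systolic_bp"]
    if sbp < 80:
        score += 3
    elif sbp < 100:
        score += 2
    return score

def check_patient(vitals, alert_threshold=4):
    """Return True if the score crosses the alert threshold."""
    return risk_score(vitals) >= alert_threshold

stable = {"heart_rate": 72, "resp_rate": 14, "systolic_bp": 120}
deteriorating = {"heart_rate": 135, "resp_rate": 28, "systolic_bp": 95}

print(check_patient(stable))         # False
print(check_patient(deteriorating))  # True
```

A real system would feed a model continuously from monitoring equipment and records rather than evaluate a handful of hand-set thresholds, but the shape of the pipeline — data in, risk out, alert above a line — is the same.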

It has also just become the first AI platform of its kind to be cleared by the US Food and Drug Administration (FDA). The decision was based on a series of studies at the University of Pittsburgh Medical Center showing that the platform could prevent unexpected deaths in hospitals. A more recent study from Stanford University, using similar technology and outlined on the preprint server arXiv, explains how a deep-learning algorithm can correctly predict an otherwise-unexpected death in 90 percent of cases.

“Everything we do as an organization aligns toward and supports the goal of eradicating unexpected deaths in hospitals,” Lance Burton, General Manager of Excel Medical, said in a statement. “People may say zero unexpected deaths is unattainable. We say anything other than zero is unconscionable.”

Currently, the plan is to implement this tech in hospitals, but Burton told Healthcare Analytics News that the company hopes to eventually bring the AI to wearable devices for people at home.

Biomedicine is one of the most exciting and promising applications for AI. Stanford University has also developed a new AI that is as accurate as doctors at identifying skin cancer from images, and hopes to get everyday smartphones to carry it.

Read More Here

Article Credit: IFL Science

Go to Source

Five Intelligent Ways Wealth Management Marketers Can Use AI

Many of us hear the words artificial intelligence and immediately think about movies like I, Robot and Terminator with robo-humans taking over the world. AI is rarely the good guy. But how do we actually dig into the subsets of AI and figure out how to leverage this techno-hip phenomenon and use it to our advantage? I’ve outlined a few easy ways that marketers can leverage AI to make their work more effective, responsive and customized for their audience.

Use chatbots to your advantage.

Chatbots offer an easy way for clients to begin their online journey with you in a self-directed way without human assistance. I can’t recall how many times I have been up late at night trying to get something done online and just needed a few simple questions answered. Chatbots make it easy for clients to get simple tasks done, when they want and where they want, regardless of their time zones or sleeping habits.

Starbucks recently launched a chatbot to help customers customize their online orders. While the coffee industry is different from wealth management, this is just one example of the countless ways to streamline client experience. Think about the experience your clients are getting from their barista and translate that over to their investments.

Employ AI to customize client communications.

Clients expect to have things explained to them in a language they understand and expect information to be accessed immediately, especially concerning something as critical as their investment performance. According to Narrative Science, a leader in natural language generation technology and a partner of F2 Strategy, “Artificial intelligence now makes it possible for organizations to craft personalized, automated communications at scale — exactly what firms need to deliver the cost-effective, high-quality personalized services their customers are seeking.” Look for ways to streamline and scale targeted content customized for each client in order to grow the relationship they have with you.
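Natural language generation systems of the kind Narrative Science describes work by turning structured data into readable prose at scale. As a hedged, heavily simplified sketch of the idea — the field names, wording and figures below are invented for illustration, not any vendor’s actual API — a template-driven generator might turn portfolio data into a personalized performance note:

```python
# Simplified sketch of template-based natural language generation for
# client communications. Field names and phrasing are illustrative only;
# commercial NLG platforms are far more sophisticated than this.

def performance_note(client):
    direction = "gained" if client["quarterly_return"] >= 0 else "lost"
    pct = abs(client["quarterly_return"]) * 100
    return (
        f"Hi {client['name']}, your portfolio {direction} {pct:.1f}% "
        f"this quarter, compared with "
        f"{client['benchmark_return'] * 100:.1f}% for your benchmark."
    )

clients = [
    {"name": "Ana", "quarterly_return": 0.042, "benchmark_return": 0.035},
    {"name": "Ben", "quarterly_return": -0.013, "benchmark_return": -0.020},
]

for c in clients:
    print(performance_note(c))
```

The point of the sketch is the scaling property: once the template and data feed exist, generating one note or ten thousand costs roughly the same effort, which is what makes personalized communications viable across an entire client base.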

Compliance can be the easiest part of your job.

Don’t believe me? What can sometimes be the most daunting part of any marketer’s job can now be seamless and pain-free for both parties. AI is driven by machine-generated outputs that can be automated to align with any existing validation requirements from your compliance and regulatory departments. Once these requirements are input correctly, an automated paper trail can be filed and recorded for any future audit needs.

Read More Here

Article Credit: Forbes

Go to Source

10 Artificial Intelligence Trends to Watch in 2018

Artificial intelligence (AI) is the new technological frontier over which companies and countries are vying for control. According to a recent report from McKinsey, Alphabet invested roughly $30 billion in developing AI technologies. Baidu, which is the Chinese equivalent of Alphabet, invested $20 billion in AI last year.

Companies aren’t the only ones investing time, money and energy into advancing AI technology — a recent article in The New Yorker reported that the Chinese government has been pursuing AI technology aggressively in an attempt to control a future cornerstone innovation.

Considering that some of the largest entities in the world are focused on advancing AI tech, it is all but certain that 2018 will see significant advancements in the space. The following are ten AI trends to look out for this year.

1. AI will become a political talking point.

While AI may help create jobs, it will also cause some individuals to lose work. For example, Goldman Sachs expects self-driving vehicles will cause 25,000 truckers to lose their jobs each month, as reported by CNBC.

Likewise, if large warehouses can operate with just a few dozen people, many of the 1 million pickers and packers currently working in U.S. warehouses could be out of a job.

During the 2016 election, President Trump focused on globalization and immigration as causes of American job-loss, but during the 2018 midterm elections, the narrative could be about automation and artificial intelligence, as more working-class Americans struggle to adjust to the new landscape.

2. Logistics will become increasingly efficient.

We are entering a world in which it will be possible to run a 20,000-square-foot distribution center with a skeleton crew. Companies like Kiva Systems — now Amazon Robotics — use a combination of artificial intelligence and advanced robotics to provide big box retailers with unprecedented logistics solutions.

Warehouses of the future will look nothing like they do today — rather than being designed to accommodate human packers, they will be built for highly capable robots that can work 24/7 and don’t require lighting to see what they are doing.

Kiva Systems, which was purchased by Amazon for $775 million in 2012, creates learning robots that can efficiently find and transport items in Amazon’s warehouses. The technology is already being used today and is expected to play an increasingly prominent role in the company’s quest for faster, less expensive deliveries.

3. Mainstream auto manufacturers will launch self-driving cars.

Tesla was one of the first automakers to launch a self-driving vehicle. In their effort to keep pace with Tesla, traditional automakers like Audi are poised to release their own self-driving cars in 2018.

The Audi A8 will feature self-driving technology capable of safely shuttling humans without driver input. Cadillac and Volvo are also developing advanced self-driving technology, which will become increasingly visible in 2018.

4. DARPA will develop advanced robo-warriors in plain sight.

The Defense Advanced Research Projects Agency (DARPA) has pioneered a number of technological breakthroughs that have affected our daily lives. The organization, which is responsible for developing new technologies for the American military, was instrumental in developing the internet and GPS navigation — it is no stranger to innovation.

Read More Here

Article Credit: Entrepreneur

Go to Source




NEWS SPREAD MONDAY of a remarkable breakthrough in artificial intelligence. Microsoft and Chinese retailer Alibaba independently announced that they had made software that matched or outperformed humans on a reading-comprehension test devised at Stanford. Microsoft called it a “major milestone.” Media coverage amplified the claims, with Newsweek estimating “millions of jobs at risk.”

Those jobs seem safe for a while. Closer examination of the tech giants’ claims suggests their software hasn’t yet drawn level with humans, even within the narrow confines of the test used.

The companies based their boasts on scores for human performance provided by Stanford. But researchers who built the Stanford test, and other experts in the field, say that benchmark isn’t a good measure of how a native English speaker would score on the test. It was calculated in a way that favors machines over humans. A Microsoft researcher involved in the project says “people are still much better than machines” at understanding the nuances of language.

The milestone that wasn’t demonstrates the slipperiness of comparisons between human and machine intelligence. AI software is getting better all the time, spurring a surge of investment into research and commercialization. But claims from tech companies that they have beaten humans in areas such as understanding photos or speech come loaded with caveats.

In 2015, Google and Microsoft both announced that their algorithms had surpassed humans at classifying the content of images. The test used involves sorting photos into 1,000 categories, 120 of which are breeds of dog; that’s well-suited for a computer, but tricky for humans. More generally, computers still lag adults and even small children at interpreting imagery, in part because they don’t have common-sense understanding of the world. Google still censors searches for “gorilla” in its Photos product to avoid applying the term to photos of black faces, for example.

In 2016, Microsoft announced that its speech recognition was as good as humans, calling it an “historic achievement.” A few months later, IBM reported humans were better than Microsoft had initially measured on the same test. Microsoft made a new claim of human parity in 2017. So far, that still stands. But it is based on tests using hundreds of hours of telephone calls between strangers recorded in the 1990s, a relatively controlled environment. The best software still can’t match humans at understanding casual speech in noisy conditions, or when people speak indistinctly, or with different accents.

Read More Here

Article Credit: Wired

Go to Source

How the cloud will revolutionize the way we work

Bill Gates once semi-famously commented that the effects of technology shifts are overrated in the short term, and underrated in the long term. This is very true — particularly when it comes to cloud computing.

The hype cycle is well documented. A technology comes to public attention, lots of people instantly grasp its potential consequences, and our lemming-like instincts cause a big bubble of hype, usually right at the time when the technology is only reaching awkward pre-adolescence. When the technology doesn’t immediately live up to the hype, we are equally lemming-like in rejecting it as useless. Meanwhile, the technology’s kinks work themselves out slowly, and in the long run, it turns out to be even more powerful than we first anticipated.

Consider how this cycle played out with cloud computing.

The cloud was supposed to drive the “consumerization of enterprise.” Because enterprise software could be delivered seamlessly through the cloud, individual workers, and not IT departments, would decide what kind of software people would use for work. This would be a big deal because software designed and sold to be accepted by IT departments is not software designed and sold to meet the needs of end users, but rather of IT departments. This explains the strange paradox whereby the software you use at home gets better and more user friendly every day, while the software you use for work still looks like something from 1997.

Has the original hyped prediction about the “consumerization of the enterprise” come true? Sort of.

Cloud computing has indeed led to a new breed of companies pushing better enterprise software that has been adopted in part from the bottom up in companies. But that “in part” is key: It turns out that “consumerization” is, thus far, less of a revolution than a marketing strategy. It’s not that you build, say, a great file storage system (like the “Dropbox for the enterprise” Silicon Valley darling that recently went public) and that people magically adopt it from the bottom up, and presto, it becomes the new thing. It’s more that you build a great file storage system, some people adopt it from the bottom up, that gets you a meeting with the head of IT, and then you sell him the designed-for-the-IT-department (and more expensive!) version.

This reality has created lots of shareholder value for Silicon Valley venture capitalists, and probably produced a real, albeit limited, improvement in the overall quality of enterprise software beyond what would have happened otherwise. But it looks much more like evolution than revolution.

And don’t forget, this only takes us past the first peak of the hype cycle. As Amazon CEO Jeff Bezos — who knows a thing or two about cloud computing — pointed out in one of his excellent shareholder letters, the main virtue of the internet is not so much this or that service, but that its lack of gatekeepers enables permission-less innovation. And permission-less innovation can be transformative, because the best ideas are often the ones that sound crazy to gatekeepers.

Read More Here

Article Credit: The Week

Go to Source

Getting the cloud above the clouds (and surviving a dry spell)

Sending data to and from different spots on Earth is big business for satellite operators, but Cloud Constellation sees a lucrative opportunity to offer satellites as the ultimate cloud storage solution for sensitive data.

The Los Angeles-based startup, now approaching the third anniversary of its founding, has a way to go to fulfill that dream.

In September, Cloud Constellation signed a launch agreement with Virgin Orbit for 12 LauncherOne missions, and has a memorandum of understanding with Space Systems Loral to build a dozen satellites for the purpose of ultra-secure data storage in space.

But Cloud Constellation, which raised $5 million about 18 months ago in a Series A funding round, says it needs $480 million to really get going. Complicating matters, co-founder and CEO Scott Sobhani, who had been diagnosed with brain cancer, died in July.

“Scott was 53 years old,” said Cliff Beek, who moved up from president to CEO after Sobhani’s death. “The impact on Cloud was emotional, he was a wonderful human being. Anyone blessed enough to have been in his orbit truly misses him.”

Beek, who joined Cloud Constellation as president shortly after the company incorporated in 2015, took over Sobhani’s day-to-day responsibilities in August 2016.

Cloud Constellation’s launch goal for its first satellite has slipped by a year to late 2019, and could end up in 2020, Beek said. He attributed the delays not to Sobhani’s passing, but to a redesign of the constellation, called SpaceBelt, from 16 satellites in low Earth orbit down to 12.

Cloud Constellation wants SpaceBelt to circle the Earth some 460 kilometers above the equator. Beek said nine SpaceBelt satellites will be for communications, while three will be for memory.

A capital raise Beek expects to close by the end of March could kick Cloud Constellation — now 21 people — into higher gear. Beek said the current raise combines two previously planned funding rounds into a single larger one, and would give the startup momentum to build out the constellation.

“We are trying to close a $200 million round, and then move to a debt raise,” he said.

Read More Here

Article Credit: Space News

Go to Source

Learn how to run Linux on Microsoft’s Azure cloud

With Linux running two out of five server instances on Azure, it’s past time to learn how to do a good job of running Linux on Microsoft’s Azure cloud.

Everyone knows Linux is the operating system of choice on most public clouds. But did you know that, even on Microsoft’s own Azure, 40 percent of all server instances are Linux? Therefore, it behooves sysadmins to pick up not just Linux skills but also to learn how to run Linux on Azure. To make this easier, The Linux Foundation has announced the availability of a new training course: LFS205 – Administering Linux on Azure.

This class provides an introduction to managing Linux on Azure. Whether someone is a Linux professional who wants to learn more about working on Azure or an Azure professional who needs to understand how to work with Linux in Azure, this course gives you the information you need.

There are a wide variety of officially supported Linux distros on Azure. These include CentOS, Debian, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), and Ubuntu. In short, Azure supports all the major Linux server operating systems.

John Gossman, Microsoft’s Azure distinguished engineer and Linux Foundation board member, explained: “With over 40 percent of VMs on Azure now Linux, we [want] … to make sure customers currently using Linux on Azure — and those who want to — have the tools and knowledge they need to run their enterprise workloads on our cloud.”

There’s a real need for such courses. “As The Linux Foundation and Dice’s 2017 Open Source Jobs Report showed, cloud computing skills are by far the most in demand by employers,” said Linux Foundation General Manager for Training and Certification Clyde Seepersad. Indeed, 70 percent of employers, up from 66 percent in 2016, are seeking workers with cloud experience.

Seepersad continued, “This shouldn’t be a surprise to anyone, as the world today is run in the cloud. Azure is one of the most popular public clouds, and a huge portion of its instances run on Linux. That’s why we feel this new course is essential to give Azure professionals the Linux skills they need, give Linux professionals the Azure skills they need, and train new professionals to ensure industry has the talent it needs to meet the growing demand for Linux on Azure.”

Before taking the class, if you’re new to Azure and Linux, I recommend taking Microsoft’s 20533C Implementing Microsoft Azure Infrastructure Solutions and The Linux Foundation’s Certified System Administrator courses.

This class starts with an introduction to Linux and Azure. It then quickly moves on to advanced Linux features and how they’re managed in Azure. Next, the course goes into container management, either in Linux or with Azure’s open-source container technologies such as Docker, OpenShift, and Pivotal Cloud Foundry. After that, the course covers how to deploy virtual machines (VMs) in Azure and discusses different deployment scenarios.

This is hands-on instruction. Once the VMs are set up, students will learn how to manage them in an efficient way. The class concludes with techniques for troubleshooting Linux in Azure, and for monitoring Linux in Azure using different open-source tools.

In a nutshell, students can expect to learn about:

  • Advanced Linux features and how they’re managed in an Azure environment
  • Managing containers
  • Deploying virtual machines in Azure and managing them
  • Monitoring and troubleshooting Linux in Azure

The class is taught by Sander van Vugt, a well-regarded Linux instructor and course developer for The Linux Foundation. He’s also a managing partner of ITGilde, a large co-operative in which about a hundred independent Linux professionals in the Netherlands have joined forces.

Read More Here

Article Credit: ZDNet

Go to Source

What is fog computing? Connecting the cloud to things

Fog computing is the concept of a network fabric that stretches from the outer edges of where data is created to where it will eventually be stored, whether that’s in the cloud or in a customer’s data center.

Fog is another layer of a distributed network environment and is closely associated with cloud computing and the internet of things (IoT). Public infrastructure as a service (IaaS) cloud vendors can be thought of as a high-level, global endpoint for data; the edge of the network is where data from IoT devices is created.

Fog computing is the idea of a distributed network that connects these two environments. “Fog provides the missing link for what data needs to be pushed to the cloud, and what can be analyzed locally, at the edge,” explains Mung Chiang, dean of Purdue University’s College of Engineering and one of the nation’s top researchers on fog and edge computing.

According to the OpenFog Consortium, a group of vendors and research organizations advocating for the advancement of standards in this technology, fog computing is “a system-level horizontal architecture that distributes resources and services of computing, storage, control and networking anywhere along the continuum from Cloud to Things.”

Benefits of fog computing

Fundamentally, the development of fog computing frameworks gives organizations more choices for processing data wherever it is most appropriate to do so. For some applications, data may need to be processed as quickly as possible – for example, in a manufacturing use case where connected machines need to be able to respond to an incident as soon as possible.

Fog computing can create low-latency network connections between devices and analytics endpoints. This architecture in turn reduces the amount of bandwidth needed compared to if that data had to be sent all the way back to a data center or cloud for processing. It can also be used in scenarios where there is no bandwidth connection to send data, so it must be processed close to where it is created. As an added benefit, users can place security features in a fog network, from segmented network traffic to virtual firewalls to protect it.
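The core decision Chiang describes — what to analyze locally at the edge versus what to push to the cloud — can be sketched in a few lines. The thresholds, field names and "shutdown" action below are hypothetical, chosen to illustrate the manufacturing example above, not any particular fog platform:

```python
# Hypothetical fog-node logic: act on urgent sensor readings locally
# (low latency) and forward only a compact aggregate to the cloud
# (low bandwidth). Thresholds and field names are invented.

def process_at_edge(readings, temp_limit=90.0):
    local_alerts = []  # handled immediately at the edge, no WAN round trip
    for r in readings:
        if r["temp_c"] > temp_limit:
            local_alerts.append(f"shutdown machine {r['machine_id']}")
    # Only this small summary travels over the network to the cloud,
    # instead of every raw reading.
    summary = {
        "count": len(readings),
        "avg_temp_c": sum(r["temp_c"] for r in readings) / len(readings),
        "alerts": len(local_alerts),
    }
    return local_alerts, summary

readings = [
    {"machine_id": "m1", "temp_c": 71.0},
    {"machine_id": "m2", "temp_c": 95.5},  # over limit: act locally, now
    {"machine_id": "m3", "temp_c": 68.5},
]

alerts, summary = process_at_edge(readings)
print(alerts)   # ['shutdown machine m2']
print(summary)
```

The latency win is that the shutdown decision never leaves the factory floor; the bandwidth win is that three raw readings collapse into one summary record bound for the cloud. Scale the same split to thousands of sensors and the savings compound.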

Applications of fog computing

Fog computing is in the nascent stages of being rolled out in formal deployments, but a variety of use cases have been identified as potential ideal scenarios for fog computing.

Connected Cars: The advent of semi-autonomous and self-driving cars will only increase the already large amount of data vehicles create. Having cars operate independently requires a capability to locally analyze certain data in real-time, such as surroundings, driving conditions and directions. Other data may need to be sent back to a manufacturer to help improve vehicle maintenance or track vehicle usage. A fog computing environment would enable communications for all of these data sources both at the edge (in the car), and to its end point (the manufacturer).

Smart cities and smart grids: Like connected cars, utility systems are increasingly using real-time data to run systems more efficiently. Sometimes this data is generated in remote areas, so processing it close to where it’s created is essential. Other times the data needs to be aggregated from a large number of sensors. Fog computing architectures could be devised to solve both of these issues.

Read More Here

Article Credit: ComputerWorld

Go to Source




The weeks around New Year’s are in many ways our favorite time of year. There’s the first cold weather and snow, there’s the holidays and time spent with loved ones, and, of course, there’s the raft of articles predicting the trends to watch for the next year. And I don’t mean to minimize those prediction efforts—on the contrary, in an industry that moves as quickly as the cloud, they offer a great view into what trends, tech and services may be coming next. They also force us all to think about what our organizations are doing in those areas and how we can use the tech and services to our advantage.

For 2018, I have six of my own predictions about the cloud industry that I think will have a meaningful impact on users. I’ll take a deep dive into a couple of them—security and containers—and a quicker look at market and technology trends, as well as how they might affect your company.

1. Security Stops Stopping Cloud Adoption

Valid or not, security has always been a common concern for organizations considering a move to the cloud. Given the improvements in cloud security, such concerns have been slightly overblown for a while, but they’re certainly understandable: it’s tough to give up control of something as important as security and leave it to an outside vendor.

For 2018, we’ll see many customers accept what security is in a cloud environment. Even if some organizations don’t feel that certain aspects of cloud security meet their expectations, they’ll still go to the cloud—it’ll just change what they do in the cloud. For example, companies will draw lines when it comes to cloud security. It won’t be a barrier to adoption in general, but it will be a factor in what companies choose to deploy. As the understanding of cloud security improves, companies will begin to accept what it is and adopt it as far as they feel comfortable.

2. Containers Become Another Tool in the Bag

Lots of excitement has surrounded containers over the past few years. We’ll continue to see growth in 2018; the industry isn’t mature just yet. But the expectation that containers would change everything about the industry will settle into something more modest: in 2018, containers will become just another tool at developers’ disposal.

Don’t get me wrong, this change isn’t necessarily bad. Containers make provisioning and managing applications and instances much easier, and many organizations must move large amounts of data quickly, easily and nimbly between development and production—or even just for testing. This use of containers is completely valid. The value in containers is that they divide applications into bite-size pieces that allow developers more agility and give DevOps engineers greater management modularity.

So, no, maybe containers won’t change the world in 2018, but if you’re a company with large amounts of data, you’ll be happy to put them to work.

3. Object Commoditization

This may not be the sexiest prediction, but it’s one that has great ramifications for the bottom line: object storage will continue its path to commoditization. Organizations are having internal discussions about whether it matters where their data resides as long as it meets their security and performance requirements, is easily accessible and fits their budget.

With commoditization comes a big price reduction, which we’re already seeing with object storage. Another effect is that we’re starting to see more and more companies ask about getting off tape altogether—a trend that will accelerate in 2018. In large and midsize organizations, tape remains widely used, but as object-storage prices continue to drop, more and more companies will consider switching.

Read More Here

Article Credit: Data Center Journal

Go to Source