Big Data And AI: 30 Amazing (And Free) Public Data Sources For 2018


Machine learning, artificial intelligence, blockchains, predictive analytics – all amazing technologies which have promised to revolutionize business and society.

They are useless, however, without data. Fortunately for businesses and organizations which don’t have the resources to methodically collect every piece of useful information they will need, a huge (and growing) amount is available freely online.

Two years ago I wrote an article listing 33 sources of Big Data available for free online. Of course, in business technology terms that was a lifetime ago, so here’s an update with thirty new entries:

  1. World Bank Open Data Datasets covering population demographics and a huge number of economic and development indicators from across the world.
  2. IMF Data The International Monetary Fund publishes data on international finances, debt rates, foreign exchange reserves, commodity prices and investments.
  3. The US National Center for Education Statistics Data on educational institutions and education demographics from the US and around the world.
  4. The UK Data Service The UK’s largest collection of social, economic and population data.
  5. FiveThirtyEight A large number of polls providing data on public opinion of political and sporting issues.
  6. FBI Uniform Crime Reporting The FBI is responsible for compiling and publishing national crime statistics, with free data available at national, state and county level.
  7. Bureau of Justice Statistics Here you can find data on law enforcement agencies, jails, parole and probation agencies and courts.
  8. Qlik DataMarket Offers a free package with access to datasets covering world population, currencies, development indicators and weather data.
  9. NASA Exoplanet Archive Public datasets covering planets and stars gathered by NASA’s space exploration missions.
  10. UN Comtrade Database Statistics compiled and published by the United Nations on international trade. Includes Comtrade Lab which is a showcase of how cutting edge analytics and tools are used to extract value from the data.
  11. Financial Times Market Data Up to date information on financial markets from around the world, including stock price indexes, commodities and foreign exchange.
  12. Google Trends Examine and analyze data on internet search activity and trending news stories around the world.
  13. Twitter The advantage Twitter has over the others is that most conversations are public. This means that huge amounts of data are available through its API on who is talking about what, where, when and why.
  14. Google Scholar Entire texts of academic papers, journals, books and legal case law.
  15. Instagram As with Twitter, Instagram posts and conversations are public by default. Their APIs allow likes, mentions and business details to be analyzed.
  16. OpenCorporates The world’s largest open database of companies.
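
Most of the sources above can be queried programmatically as well as browsed. As a minimal sketch, assuming the World Bank’s publicly documented v2 REST endpoint and the SP.POP.TOTL indicator code (both should be verified against the current API documentation), pulling a dataset looks roughly like this:

```python
# Sketch: fetching a World Bank indicator via its public REST API.
# The endpoint and indicator code are assumptions based on the documented v2 API;
# check the current World Bank API docs before relying on them.
import requests

URL = "https://api.worldbank.org/v2/country/all/indicator/SP.POP.TOTL"

def fetch_population(year: int = 2016, per_page: int = 400) -> list[dict]:
    """Return a list of {country, population} records for the given year."""
    resp = requests.get(
        URL,
        params={"format": "json", "date": str(year), "per_page": per_page},
        timeout=30,
    )
    resp.raise_for_status()
    metadata, rows = resp.json()  # the API returns [metadata, data]
    return [
        {"country": r["country"]["value"], "population": r["value"]}
        for r in rows
        if r["value"] is not None
    ]

if __name__ == "__main__":
    for record in fetch_population()[:5]:
        print(record)
```

Many of the other government and multilateral sources in the list expose similar JSON or CSV endpoints, so the same request-parse-filter pattern carries over.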

Read More Here

Article Credit: Forbes

How To Assure Quality Of Big Data and Analytics Solutions?

As Big Data and Data Analytics advance further into our lives, with practical applications being built around them and organizations leveraging the power of Big Data, it is important to glance sideways at a crucial factor in the success of Big Data, or of almost any other application: Software Quality Assurance.

Recommendation, prediction and decision systems have grown in popularity, driving demand for advances and research in the Big Data quality assurance space. A range of quality problems leads to errors and inflated testing costs for businesses and organizations.

According to a research paper published by Chuanqi Tao and Jerry Gao on ResearchGate, Big Data application Quality Assurance covers the study of various assurance methods, processes, criteria, standards, and systems to make sure that the quality of the Big Data application or system adheres to a set of quality parameters.

Big Data Testing, according to Chuanqi and Jerry, includes four necessary processes-

  • The testing of functions in Big Data systems, like intelligent algorithms, rich oracles, learning capabilities, and domain-specific methods.
  • The testing of the Big Data system’s functions like security, robustness, system consistency, and Quality of Service.
  • The testing of the Big Data system’s features like system evolution, visualization, usability, and so on.
  • The testing of the Big Data system’s timelines, like lifetime testing, real-time testing, continuous testing, and testing of other time-related features.

The research lists the quality factors that determine the goodness of the system as:

  • System Performance – The performance of the system, such as its availability, response time, throughput, security, etc.
  • System Data Security – This factor evaluates and assesses the security aspect of the Big Data System, as Data Security is the one aspect that really haunts the enthusiasts and influencers in the field.
  • System Reliability – The durability of the system is evaluated by testing it for the performance of a specified function under the pre-stated conditions for a specified period of time. If the system responds in the manner that was expected, it is said to be reliable.
  • System Robustness – This factor captures how well the system can resist change without departing from its initial stable and secure configuration.

According to Experian, 75 percent of businesses today are wasting 14 percent of their revenue due to the poor quality of data. It becomes imperative in this regard to make sure that the data is tested and filtered before it is applied for deriving knowledge and valuable insights.
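
In practice, that kind of pre-analysis testing usually starts with simple automated checks. The sketch below is a generic illustration rather than the framework from the paper or the Experian study; the sample columns and the null-rate threshold are assumptions:

```python
# Sketch: basic data-quality checks to run before a dataset feeds analytics.
# Generic example only; the columns and thresholds are illustrative assumptions.
import pandas as pd

def quality_report(df: pd.DataFrame, key: str, max_null_rate: float = 0.05) -> dict:
    """Summarise row count, duplicate keys and per-column null rates."""
    null_rates = df.isna().mean()
    return {
        "rows": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "columns_over_null_threshold": list(null_rates[null_rates > max_null_rate].index),
        "null_rates": null_rates.round(3).to_dict(),
    }

if __name__ == "__main__":
    sample = pd.DataFrame({
        "order_id": [1, 2, 2, 4],
        "amount": [100.0, None, 250.0, 80.0],
        "region": ["SG", "SG", None, "MY"],
    })
    print(quality_report(sample, key="order_id"))
```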

A robust Quality Assurance framework is the need of the hour to better the insightful analysis of data and eventually enhance the end user experience.

In a blog post for TCS, Rajni Sachan suggests a way to develop a framework for the software testing services and quality assurance of Big Data and Analytics projects.

Read More Here

Article Credit: CT

Time to apply big data analytics

As Parliament debates the 2018 Budget, it is timely to ask why there is no variance analysis of the 2017 actual expenditures versus the 2017 Budget.

This is a very basic management practice in many corporations and a critical tool in financial audits.

Perhaps this can be a priority project: the accountability of how public funds are actually spent.

To date, the focus has always been on the Budget itself. Such an approach has been suspected of encouraging spending simply because the budget has been allocated.

Sophisticated accounting tools have been available for the longest time. For instance, business dashboards are common management information tools used to track key performance indicators and other key data points relevant to any business or department.

Dashboards use data visualisations to simplify complex data sets to facilitate quick assessments and checks.
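
Variance analysis itself is straightforward arithmetic once budget and actual figures sit side by side; a dashboard simply visualises the result. A minimal sketch, using made-up line items rather than any real budget data:

```python
# Sketch: budget-vs-actual variance analysis on illustrative, made-up figures.
import pandas as pd

report = pd.DataFrame({
    "ministry": ["Health", "Education", "Transport"],
    "budget":   [10_500, 12_800, 9_300],   # $ millions, invented numbers
    "actual":   [11_200, 12_100, 9_900],
})

report["variance"] = report["actual"] - report["budget"]
report["variance_pct"] = (report["variance"] / report["budget"] * 100).round(1)

print(report.to_string(index=False))
```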

There is really no excuse for not leveraging existing and new tools to provide full accountability of both budgeted and actual expenditure numbers of taxpayer money.

Naysayers may think it is impossible to do this on a country level. They only need to look at the California state controller’s site at https://bythenumbers.sco.ca.gov/ to be persuaded otherwise.

California is only a state in America but its 2015-2016 revenues of US$78 billion (S$103 billion) and actual expenditures of US$74 billion are bigger numbers than the $89 billion expenditure planned for the 2018 Singapore Budget. It can be done, if there is a will to do so.

The California site is based on the open data concept, defined by the site as “online data that anyone can access, use and share… to encourage users to review, compare, visualise and analyse data and share their discoveries in real time… and has the potential to help the public identify wasteful spending and increase government efficiency as well as promote community involvement and improve California’s business climate”.

Such admirable objectives in themselves are enough for Singapore to emulate and adopt the open data concept.

Read More Here

Article Credit: ST

Big data boss

IMAGINE if your boss could take pre-emptive measures to retain employees who show signs of wanting to leave. Or if companies could get a handle on which potential hires would be the best fit – even before meeting them in person.

Sounds like the stuff of science fiction? Scenes like these are already playing out at some companies in Singapore, where a technological revolution is underway in the usually sedate field of human resources (HR).

While HR departments typically run on gut instinct rather than hard data, a growing number of companies are trying to apply a data-driven approach to managing staff.

Data analytics can help companies decide who to hire, tailor training programmes to suit staff requirements – even predict which employees are likely to leave. The data can come from a wide variety of sources, including employees’ leave applications, key performance indicator reports, email traffic, and social networking activity.
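
As a rough illustration of the “predict who is likely to leave” idea, the sketch below trains a toy classifier on synthetic HR features; the features, weights and data are invented and do not reflect any vendor’s model:

```python
# Sketch: a toy attrition model on synthetic HR data.
# All features, coefficients and labels are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(0, 30, n),    # leave days taken this year
    rng.normal(3.0, 1.0, n),   # last KPI rating
    rng.integers(0, 200, n),   # emails sent per week
])
# Synthetic label: heavy leave usage and low ratings loosely raise attrition odds.
p = 1 / (1 + np.exp(-(0.08 * X[:, 0] - 1.2 * X[:, 1] + 2.0)))
y = rng.random(n) < p

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("holdout accuracy:", round(model.score(X_test, y_test), 2))
```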

HR practitioners say “people analytics” – as these methods are called – have the potential to create better workplaces, boost employee-employer relationships and improve talent retention rates.

 Even as these technologies gain popularity, however, concerns are emerging over the implications for employees’ data privacy.

How are data and technology helping companies manage talent and retain staff? And how far should employers go in their efforts to learn more about employees?

Growing interest

Tech giant Google was – of course – among the pioneers in people analytics. In a bid to “build better bosses”, Google embarked on a plan code-named Project Oxygen in 2009.

The data-mining giant hired statisticians to trawl through performance reviews, feedback surveys and nominations for top-manager awards.

This analysis yielded a list of common behaviours among the best managers: employees value bosses who are good coaches, do not micromanage, and take an interest in employees’ lives and careers, among other characteristics.

While the findings hardly seem earth-shattering, they made a big splash, mainly because the insights were backed up by hard data – revolutionary in a field that has traditionally relied mainly on intangibles. Interest in people analytics has exploded in the years since.

HR analytics roles in the Asia-Pacific region have more than doubled in the past 10 years, according to data from professional networking site LinkedIn. It estimates that there are now almost 40,000 HR analytics professionals in the region. And in the Asia Pacific, Australia, New Zealand and Singapore have the highest penetration of data or analytics skills in HR functions, LinkedIn data says. In Singapore, close to half of HR professionals are equipped with data or analytics skills.

Companies are clearly starting to pay attention to the potential that analytics holds for people management. Deloitte’s 2017 Global Human Capital Trends report – which polled 10,400 business and HR leaders across 140 countries – found that 71 per cent of respondents see people analytics as a high priority in their organisations.

Recruiting is the No 1 area of focus, followed by performance measurement, compensation, workforce planning and retention, the report found.

“The types of decisions that can be made using data range from micro decisions like how to recognise staff and how often, to enterprise-wide considerations like whether or not to downsize a particular division and the likely impact on productivity,” says Leong Chee Tung, co-founder and chief executive of employee engagement platform EngageRocket.

Read More Here

Article Credit: BT

IBM’s Big Bet On Cloud AI Will Pay Off

Gaining an advantage in the global high-tech market is often the result of research and development (R&D) combined with execution and incremental improvement. Today’s emerging cloud-based artificial intelligence (AI) platforms are a perfect example of R&D-fueled competition. With the cloud giants and many governments investing heavily in AI R&D, what’s a mere multinational corporation to do?

IBM spent the past few years restructuring to address the combination of “cognitive services” (AI-based services) and cloud platforms. During that time, IBM continued to invest in R&D, but focused their efforts on AI software, “as-a-service” initiatives and scale-out systems designed for delivering cloud-based analytics and AI services. As part of this transition, IBM opened its cloud infrastructure development to a much broader community and also partnered with other AI leaders like NVIDIA to provide high-performance AI systems.

In a few weeks, IBM will highlight all its efforts in AI at Think 2018, IBM’s combined partner, customer and developer conference, in Las Vegas. The following is what we expect to see at Think and why IBM should be successful in betting big on AI.

Partnerships for AI Acceleration

NVIDIA’s CEO Jensen Huang will speak at the Think opening keynote “The Journey to AI” on Tuesday morning. IBM partnered with NVIDIA over the past several years to engineer some of the most advanced GPU-accelerated server systems in the industry today. Aspects of NVIDIA NVLink high-speed interconnect technology were designed into IBM’s Coherent Accelerator Processor Interface (CAPI) and then opened to a much broader server ecosystem via the OpenCAPI Consortium.

IBM has also worked closely with Mellanox Technologies, Micron Technology and Xilinx on network and compute acceleration solutions. In addition, NVIDIA, Mellanox, Micron and Xilinx are all board members of OpenCAPI and Platinum members of the OpenPOWER Foundation. IBM formed the OpenPOWER Foundation to foster a multi-vendor ecosystem around IBM’s POWER architecture. The third OpenPOWER Summit will be collocated with Think this year.

Read More Here

Article Credit: Forbes

What Are YOU Looking At? Mind-Reading AI Knows

Japanese scientists know what you’re looking at — but don’t worry, there’s no need to close your other browser tabs yet. Using an artificial intelligence (AI) system alongside fMRI scans, researchers were able to create an apparently mind-reading AI — “or perhaps at this point just mind skimming,” said Umut Güçlü, a researcher at Radboud University in the Netherlands who was not involved in the research, to New Scientist.

The system is actually similar to AI technologies that have been used successfully to caption images. To do this for someone’s brain, the AI first needs an image of their brain taken with an fMRI scanner while the person is looking at an image. These scans show activity in the brain through blood flow.

The mind-reading AI isn’t always completely correct; in one of the tests, it thought a participant was looking at scissors, when they were looking at a clock. Yet even when wrong, it sometimes came tantalizingly close. For example, when one person being scanned was looking at an image of a man kayaking in a river, the AI captioned it: “A man is surfing in the ocean on his surf board.”

In other cases, the AI was spot on: when the image was of a group of people standing next to each other, or of a black and white dog, the system was absolutely right.

The system presently has its limits. Images from fMRI don’t record all activity in the brain, and so there are boundaries to how detailed these captions can be. This method also requires a participant to lie in a large machine, making it poorly suited for use anywhere but in a medical facility.

While at-home applications might be far off, this type of technology could be used to support the development of brain-computer interfaces (BCIs). Emerging BCI tech uses small electrodes, as opposed to fMRI machines, to monitor brain activity. This research could potentially support these efforts and one day allow humans, with the help of their mind-reading AI, to control computers with only their minds. We’re nowhere near these abilities, but we can almost picture it now — and our AI would probably see it, too.

Read More Here

Article Credit: Futurism

Artificial Intelligence Is About To Dramatically Change The E-Learning Industry

E-learning has the potential to revolutionize education.

For one thing, the internet and burgeoning AI technology have made e-learning more accessible than ever before.

But e-learning also offers solutions to some of education’s most pressing challenges, and in the future, it could serve to more adequately provide all students access to quality teaching.

Here’s how.

1) E-learning can more meaningfully differentiate curriculum

Everyone processes content in different ways and at different speeds. The pace or style that works well for one student might very well be too fast, slow, or confusing for another.

Teachers have known this for a long time. But in the traditional classroom model, wherein one teacher with limited resources has to teach an entire class all at once, differentiating content to meet the varied needs of each student is impossible.

With E-learning, however, not only is such differentiation possible — it’s an integral component of the process.

E-learning equipped with intelligence can recognize students’ understanding of a concept automatically, and then suggest different paths of learning depending on the current level of mastery.

This will enable different students to master content at their own pace and will ensure that all students in a given class or community are successful.
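
A minimal sketch of that kind of mastery-based routing appears below; the thresholds and activity names are illustrative assumptions, not a description of any particular platform:

```python
# Sketch: routing a learner to the next activity based on quiz mastery.
# Thresholds and activity identifiers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class QuizResult:
    concept: str
    score: float  # fraction correct, 0.0 to 1.0

def next_activity(result: QuizResult) -> str:
    if result.score >= 0.85:
        return f"advanced::{result.concept}"   # stretch material
    if result.score >= 0.60:
        return f"practice::{result.concept}"   # more exercises at this level
    return f"remedial::{result.concept}"       # re-teach prerequisites

print(next_activity(QuizResult("fractions", 0.50)))  # remedial::fractions
print(next_activity(QuizResult("fractions", 0.90)))  # advanced::fractions
```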

2) AI can encourage individual tutoring

Content knowledge builds on itself. When you’re learning something new, you start off with the basics, build a foundation, and proceed conceptually from there.

This is why it’s so critical that students are able to ask questions as soon as points of confusion pop up.

But this is also one central problem with the traditional classroom model. In traditional classrooms, if students misunderstand some aspect of a lesson, they either have to wait until the end of the lesson to ask their questions or until their professor’s after-lesson office hours. That results in them missing all of the content that followed that moment of confusion.

E-learning technology equipped with new models of artificial intelligence, however, addresses points of confusion as soon as they arise. Artificial intelligence can act as a virtual tutor and answer questions on the fly.

In my experience teaching, I realized that many of the basic questions students ask are common among students, but asked repeatedly in different forms. AI enables tutors to understand the questions asked across forums and provide easier clarification to all students.
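
One simple way to surface those repeated question variants is plain text similarity. The sketch below is a generic TF-IDF example, not a description of any specific tutoring product; the questions and the similarity threshold are illustrative:

```python
# Sketch: grouping near-duplicate student questions by TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

questions = [
    "How do I reverse a list in Python?",
    "What is the best way to reverse a list in Python?",
    "Why does my program crash with a KeyError?",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(questions)
sim = cosine_similarity(tfidf)

# Pair up questions whose similarity exceeds an (illustrative) threshold.
for i in range(len(questions)):
    for j in range(i + 1, len(questions)):
        if sim[i, j] > 0.3:
            print(f"likely duplicates: {questions[i]!r} ~ {questions[j]!r}")
```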

Read More Here

Article Credit: Forbes

Senate mulls offensive AI, new training tools and now Chinese faceswaps Trump

Roundup Your weekly dose of tidbits from the AI world, beyond everything we’ve already covered, begins with a Senate committee hearing where a US lieutenant general, currently a nominee for the role of director of the NSA, spoke about his concerns around the technology. It ends with the CEO of a Chinese AI startup demonstrating how AI can be used to perform a faceswap on Trump and Obama.

Lieutenant General Paul Nakasone, currently the commander of the United States Army Cyber Command, was quizzed by Senator Ted Cruz (R-TX) about his thoughts on AI.

The Senate Committee on Armed Services was considering the nomination of Nakasone for the role of director of the NSA, as well as Dr Brent Park to be deputy administrator for Defense Nuclear Nonproliferation for the National Nuclear Security Administration, and Anne White to be assistant secretary of energy for environmental management for the department of energy.

Senator Cruz brought up the idea of poisoning systems with adversarial examples, something that was discussed during the first congressional hearing on AI he chaired last year.

Cyberterrorism in the future will not occur by DDoSing or “bringing the system down”, Cruz said. “But far more subtly, simply changing the data in the big data datasets so that the AI algorithms reach the wrong results”.

Adversarial machine learning is a big area of study in research, which has shown that these systems are shockingly brittle. Fuzzing a few pixels here and there can fool convolutional neural nets.
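
The classic fast gradient sign method (FGSM) makes the point concretely. The sketch below uses an untrained toy network and a random image, so the prediction flip is not guaranteed; it only illustrates the mechanics of such an attack:

```python
# Sketch: an FGSM-style perturbation against a toy, untrained CNN.
# Model, input and epsilon are arbitrary illustrations, not a benchmarked attack.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(  # tiny stand-in for a real image classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)
label = torch.tensor([3])

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.03  # small per-pixel budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:    ", model(image).argmax(dim=1).item())
print("perturbed prediction:", model(adversarial).argmax(dim=1).item())
```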

Nakasone agreed and called data “the coin of the realm”. He didn’t really address the problem of adversarial examples, but did talk about the need to verify any changes to the organization’s code.

“So Senator, previously we thought of only securing our networks. And what we’ve certainly learned is the fact that securing our data, which I’d say is ‘the coin of the realm,’” he said.

“Our data is critical. Think of the dangers that are posed if our data is manipulated, whether or not it’s in our financial, our health, our national defence records – it’s very, very critical for what we are doing. But also think of the security for our weapon systems that go with it. The code that underlines our platforms, the code that underlines the critical capabilities that our army, navy, airforce, and marines rely on.”

“In terms of what must be done, I would offer that we have to think more broadly in terms of defense-in-depth strategies as we look to the future. You highlighted the challenges of AI. Just as critical as AI might be for a terrorist, it’s critical for us to verify code.”

“To be able to have the capability to verify the integrity of our data. And so I do see this as one of the areas that both has tremendous positive impacts for our nation and one that we must be able to understand the limitations and the consequences as well.”

Another interesting question Cruz asked the lieutenant general was about how the NSA can compete with Silicon Valley for AI talent. There aren’t that many devs around with a specialized background in machine learning to fill the many roles created by the current hype.

Read More Here

Article Credit: The Register

New Algorithm Lets AI Learn From Mistakes, Become a Little More Human

AN AI THAT LOOKS BACK

In recent months, researchers at OpenAI have been focusing on developing artificial intelligence (AI) that learns better. Their machine learning algorithms are now capable of training themselves, so to speak, thanks to the reinforcement learning methods of their OpenAI Baselines. Now, a new algorithm lets their AI learn from its own mistakes, almost as human beings do.

The development comes from a new open-source algorithm called Hindsight Experience Replay (HER), which OpenAI researchers released earlier this week. As its name suggests, HER helps an AI agent “look back” in hindsight, so to speak, as it completes a task. Specifically, the AI reframes failures as successes, according to OpenAI’s blog.

“The key insight that HER formalizes is what humans do intuitively: Even though we have not succeeded at a specific goal, we have at least achieved a different one,” the researchers wrote. “So why not just pretend that we wanted to achieve this goal to begin with, instead of the one that we set out to achieve originally?”

Simply put, this means that every failed attempt as an AI works towards a goal counts as another, unintended “virtual” goal.

Think back to when you learned how to ride a bike. On the first couple of tries, you actually failed to balance properly. Even so, those attempts taught you how to not ride properly, and what to avoid when balancing on a bike. Every failure brought you closer to your goal, because that’s how human beings learn.

REWARDING EVERY FAILURE

With HER, OpenAI wants their AI agents to learn the same way. At the same time, this method will become an alternative to the usual rewards system involved in reinforcement learning models. To teach AI to learn on its own, it has to work with a rewards system: either the AI reaches its goal and gets an algorithm “cookie” or it doesn’t. Another model gives out cookies depending on how close an AI is to achieving a goal.

Neither method is perfect. The first one stalls learning, because an AI either gets it or it doesn’t. The second one, on the other hand, can be quite tricky to implement, according to IEEE Spectrum. By treating every attempt as a goal in hindsight, HER gives an AI agent a reward even when it actually failed to accomplish the specified task. This helps the AI learn faster and at a higher quality.
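
At its core, HER amounts to relabeling stored experience with the goal that was actually achieved. The sketch below is a simplified illustration of that relabeling step, not OpenAI’s Baselines implementation:

```python
# Sketch: hindsight relabeling of transitions, in the spirit of HER.
# A simplified illustration, not OpenAI's released code.
from dataclasses import dataclass

@dataclass
class Transition:
    state: tuple
    action: int
    achieved_goal: tuple
    desired_goal: tuple

def reward(achieved: tuple, desired: tuple) -> float:
    return 0.0 if achieved == desired else -1.0  # sparse reward

def hindsight_relabel(episode: list) -> list:
    """Store each transition twice: once with the original goal, and once,
    in hindsight, with the goal the agent actually reached at episode end."""
    final_goal = episode[-1].achieved_goal
    buffer = []
    for t in episode:
        buffer.append((t.state, t.action, t.desired_goal,
                       reward(t.achieved_goal, t.desired_goal)))
        buffer.append((t.state, t.action, final_goal,
                       reward(t.achieved_goal, final_goal)))
    return buffer

episode = [Transition((0, 0), 1, (1, 0), (3, 3)),
           Transition((1, 0), 1, (2, 0), (3, 3))]
for entry in hindsight_relabel(episode):
    print(entry)
```

Judged against the hindsight goal, the final transition of the “failed” episode now earns a reward of 0, which is exactly the extra learning signal described above.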

Read More Here

Article Credit: Futurism

AI Looms Over the City: the Kaiju Approach

I began tapping, it grew louder, then I thumped the lectern, then again (louder). That woke them up. I always enjoy seeing the mixture of shock, awe, gasps, disbelief, puzzlement and smiles as I speak about the New Fund Order: where Technology meets Finance meets Philosophy, meets Science Fiction.

Armed with my walking stick to accentuate points with dramatic gesturing motions, I would tell them that in a New Fund Order, Artificial Intelligence (AI) looms over all of us like a kaiju*, a giant 200 foot Godzilla. Bluntly put, Finance needs to find new solutions through Technology or become obsoleted by it.

*Kaiju (怪獣 kaijū) is a Japanese word that means “strange creature,” often translated as “monster” or “giant monster”

Today the looming threat of AI seems obvious and very real. Why then did Finance so badly underestimate the threat for 20 years? There was certainly a sense of complacent containment. To some extent the City of London had assumed it had controlled Technology after it moved to electronic trading in the 1986 ‘Big Bang’ – the de-regulation and computerisation of the City.

Be it London or New York, Tokyo or Geneva, the City, in its broadest catholic sense, had after all funded the automation of blue collar roles over 40 years, without fuss, without remorse, without resistance, give or take the odd union strike. Indeed the systematic breakdown of worker unions and labour was key to reducing man hours, shrinking workforces and lowering operating costs, allowing companies to deploy CapEx into machines. There was a sense then that Finance held sway over the purse strings of Technology, the capitalisation of Silicon Valley and the Internet through the 1990s and 2000s. And that was true, then.

However, since the Dotcom crash Technology had kept accelerating. A series of market crashes opened the way for Tech Disrupters. Those crashes combined with a collapse in trust in big finance and the disintermediation of the balance sheets of large pension schemes and banks. To curb those ‘bankers’, the industry moved from self regulation back to statutory regulation. Investors began to put faith in technology over Finance. Assets also started to flow through index baskets, and the influence of Finance was on the wane.

I had seen that change coming, having worked through 3 major market crashes in 1997, 2001 and 2008. Throughout, I have worked for a long time as an educator, writing Finance textbooks and working towards better professionalism therein. My industry work now takes me into broader areas such as technology, costs, transparency, Environmental Social Governance (ESG), industry consolidation, product innovation, Investment Governance and supporting the growth of Fintech in my sector. Using my observations I was motivated in 2015 to write ‘New Fund Order’, discussing “digital death” for my profession and the underlying reasons.

It is somewhat fitting to note that, as a born Scotsman and where I am still based today, Scotland was the birthplace of analogue Finance, and it is therefore apt to consider how Scotland’s Finance industry might adapt to AI. Ground zero.

Finance 2.0 has been coming heavily from the US. Indeed halo US companies like Google (Alphabet), Microsoft, Facebook were not only changing Finance externally but also becoming the largest capitalised stocks within it. Asia Technology joined the party through Samsung, TenCent and Alibaba. This has created a huge feedback and validation of Technology as both a source of economic growth and social advancement on a global scale.

Consequently, as investors became more familiar with Technology in their own lives, they became more attuned to Technology managing their Finances. The desire for faster information – what we call the latency effect – saw a rapid shift towards individuals and away from intermediaries. Now the public could access information previously privileged to the few, and without an information or technology advantage the role of Finance has become exposed. A simple fact utterly underestimated by the industry.

Over time Finance had been built on rules, laws and regulations, these were considered sacrosanct and unassailable; with a generous sprinkling of subjective bias and judgement. The industry had developed technology previously to be subservient and most importantly out of reach of clients. The number of times I heard about an actuarial consultant ‘turning the wheel’ on a long-standing model, all the time charging clients a quantum for the privilege. Frankly this is absurd industry complacency. It simply couldn’t hold up to inevitable transparent scrutiny. The likes of writer Martin Ford and MIT have noted the unrelenting substitution for white collar roles in other sectors, so too is change coming to Finance professions.

“the hurdle machines have to cross to out-perform humans with college degrees isn’t that high.” Martin Ford, Author ‘The Rise of the Robots’.

“The end game scenarios seem kind of severe. From here on in, it’s really, really, really going to change and it’s going to change faster than we can handle.” Matt Beane, MIT.

Yet the acceleration of technology alone could not have unseated the rocksteady position of Finance so readily; it also took the latent issues within the establishment to bring about its own demise. Our own Monsters!

Why can’t we rely on the human condition alone to deliver effective Finance? Why does it break down or resist automation? Well, we assume only humans are capable of empathy, trust and making good decisions. But consider: alas, the human condition is not itself constructive;

Karl Marx wrote “the more the division of labour and application of machinery extend, the more does competition extend among the worker”

While George Orwell wrote “on the whole humans want to be good but not that good and not all the time”.

Lastly JP Morgan, forefather of modern Investment Banking, said “someone does a thing for two reasons: a good reason and the real reason.”

It is no surprise, then, that Finance has achieved both amazing things and much to be criticised for. Our industry is a litany of poor practice, high fees, big bonuses, fraud, bubbles, manipulation, market crashes and Ponzi schemes. Unsurprisingly, when the populace loses trust in the appointed class, it opens the door to monsters. Thus technology was no longer benign for its Finance masters.

Meanwhile you need only take a trip to the airport, open a newspaper or get into a taxi to see the marketing power of Finance. Since the 1980s Finance had become big business, deeply human intensive and highly compensated. That introduced strong economic incentives to resist change. When we talk about behaviours, the human condition is susceptible to many, driven by a desire to succeed, to make profit, fear, greed. Since the Great Financial Crisis, Asset Management had grown as capital moved from bank balance sheets and defined benefit pension schemes through to individual retirement accounts and into mutual funds. We called this disintermediation. It moved the capital at risk from employers to workers. Now workers were paying Finance directly, a form of taxation that helped the division of wealth as companies were released to focus on shareholder returns.

In response to the influx of assets, asset managers increased their portfolio desks; salaries and bonuses swelled, fuelled by an endless supply of graduates, CFAs, MBAs and economic migrants from investment banking, hedge funds and sell-side research. More people meant higher operating costs, complexity and inefficiency in managing investor money. The Old Fund Order can be typified as:

• Human Intensive
• Human-Human
• Complex Value Chains
• High Salaries and Fees
• Information Advantage
• Fraud and Ponzi schemes

Where Finance has been tested in recent years, somewhat ironically, is in justifying its own economic value. So what is Optimum Economic Value (OEV)? In its purest form, we recognise the Finance value chain is itself an alignment between a customer and a financial outcome. Think of it as a piece of rope. How long and how straight is it, is it loose or is there a clear tension (a directness and transparency) between the two points?

Finance is defined by lots of parties involved in the front, middle and back office; it is people intensive. Think of a traditional portfolio of actively managed funds sold by a distributor on the advice of a financial adviser, bundled into a retirement product, regulated, consulted, traded, operated, audited. Lots of pound signs. Having largely operated unfettered for 20 years, what exposed these value chains was twofold.

First, the immutable growth of computing power known as Moore’s Law: the doubling of computing power at least every 2 years. Secondly, Parkinson’s Law, a 1958 paper that observed that organisations become less efficient the more people you hire. We also call Parkinson’s Law the ‘Coefficient of Inefficiency’, and historical examples have included the Roman Empire, the Greater London Council, the Civil Service, the Department of Defence, IBM, the British car industry and the Banks. Likewise, Finance was quickly finding itself outdated, inefficient and expensive.

Despite this, Finance for over a decade tried to box, restrict and otherwise compartmentalise ROBO, portraying it as a dim-witted ‘Robbie the Robot’ from the 1956 Forbidden Planet. Big mistake! CitiGroup believes RoboAdvisors will hit $5 trillion AUM in the next decade. A more recent study by Deloitte estimated that “assets under automated management” (including hybrid offerings) in the U.S. will grow to U.S.$7 trillion by the year 2025 from about U.S.$300 billion today. More alarming (if you are a financial adviser) is that consultants A.T. Kearney predict that assets under “robo-management” will total $2.2 trillion by 2021.

Another view of Fintech is that of the 1933 classic ‘KING KONG’. A loud chest-beater. All noise, but no bananas? Certainly this was the crux of the audience questions. Firstly, there is a lot of noise, mostly from consultants and big business, but change is happening and at an accelerating rate. Simply note how the make-up of Finsbury Square is changing and innovation is spilling out of Old Street into Threadneedle, into EC2. The very heart of the City.

No longer just noise, then; what is being systematically removed is human intensity, to be replaced by deep learning and AI. Until the late 1970s hundreds of clerks updated futures prices on chalkboards and recorded them on Polaroid film. Thousands of traders walked the pits; hundreds of thousands of accountants, actuaries, typing pools, administrators and computers processed, calculated, deliberated and predicted… all gone! The first major electronic platform was Instinet, which could bypass the trading floor. It was superseded in the 80s by Bloomberg and Archipelago, which began to replace floor traders. In 2000 there were over 150,000 people involved in securities and commodities contracts in New York alone. In 2016 there were fewer than 100,000, yet the asset market has grown five-fold since 2000.

Alternatively, take actuaries: in 2009, 110 students qualified as Associates of the Faculty or the Institute of Actuaries, and 335 qualified as Fellows. In 2016 the Institute and Faculty of Actuaries reported 29,000 members (December 2016), 52% of whom were students; 73% of members were 40 years old or under. With older actuaries retiring, what is the future for the next generation, given traditional actuarial roles are in decline? Martina King, writing for the Actuarial Post:

“In the Insurance sector, reducing the number of highly skilled, highly paid actuaries by replacing them with technology is attractive. It’s a potentially scary prospect for actuarial careers… there are few open positions for individuals with predictive analytical skills. In other sectors too, organisations are slotting into job ads the request for experience in machine learning. It’s worth the investment in gaining these skills to get ahead.”

We are seeing record numbers of CFAs and MBAs but a reduction in the number of roles to fill, as incumbents work longer into life. Any role that is based mostly on rules rather than creative critical thinking is obviously at risk, but ultimately all roles are in danger. Regulation may help preserve some roles for a time, driven by our desire for human accountability, but this will ebb. Roles that survive longer-term will adapt to work with AI. My own profession is not immune to this threat. The Fund Selection community is quickly waking up to it, and it is something we are actively discussing at the Association of Professional Fund Investors (APFI). We are now seeing a new wave of fund analysis, selection tools and digital fund warehousing, which is making fund selection more accessible and transparent. The information advantage is closing. AI will change how mutual funds are analysed and selected in future.

With the closing information gap, the challenges for Fund analysts can be summarised as growing transparency and the lack of performance persistency from active fund managers, which has a knock-on effect on Fund Selectors; also the decomposition of activeness itself into factors, but also luck and risk-taking. Consider the following studies:

  • Past Performance Persistency: Mark M. Carhart, Journal of Finance 1997; Blake and Timmermann 2003; S.J. Brown 2006; Luckoff 2011; Barclays Capital 2012
  • The effect of Marketing and Commission on Broker-sold funds: Del Guercio and Jonathan Reuter, University of Oregon, 2012, 2015
  • Heuristics and Behavioural Finance: Kahneman and Tversky, 1974, 2007, 2015
  • Active Share: Cremers, Petajisto, 2010, 2013

After all, if fund managers cannot add value, then what value do the people who select them offer? Other issues include inducements, marketing, survivorship bias and behavioural biases. Obvious symptoms of this shift include the large move towards index investing and Exchange Traded Funds (ETFs). What we are now moving towards is greater mass objectivity, away from individual subjectivity. Performance and quant based analysis is rapidly becoming codified and automated. Meanwhile qualitative fund analysis, which is itself judgement based, is also undergoing change. It originates from the Harvard Marketing Mix and later management consulting frameworks like Booms & Bitner and Russell: Profile + Process + Portfolio + People + Price… + Pi? Such approaches are coming under scrutiny, as investors get better direct access to fund information. The very value of such analysis is now in doubt. What is now changing is the disruption from crowd research platforms, which reduces the information advantage and thus the premium of traditional analysts.

So if the value of human intelligence is left in question, what then for Artificial Intelligence? AI has entered the daily narrative; the term “artificial intelligence” is applied when a machine mimics “cognitive” functions such as “learning” and “problem solving”. Today it is hard to fathom just what the limitations of AI are. It certainly offers advantages both in the rapid assimilation of Big Data, learning from changes in data and predictive modelling. This takes us back to our friend Robbie the Robot. Consider first that AI algorithms fall into many classes and have already permeated various corners of society:

• Search and optimisation: program synthesis
• Logic and information consciousness
• Probabilistic methods for uncertain reasoning
• Classifiers and statistical learning methods
• Neural: feed forward, recurrent, differentiable
• Control theory
• Languages and translation
• Evaluating progress and self-learning
• Complex Adaptive Systems

For example, a simple model was already proposed by Ludwig and Pivioso in their 2005 machine-learning paper. They also considered what sort of algorithm a Fund Selection Robo should adopt from three choices: a Decision Tree, a Neural Network or a Naive Bayes approach. Ludwig and Pivioso concluded that all three approaches outperformed the simple scoring models typically employed by human Fund Selectors. What is frightening is that this was achieved with only a simple array of data inputs. Let’s remind ourselves what these approaches do; Ludwig and Pivioso described them as:

“Decision-tree algorithms construct a flowchart-like structure where each node of the tree specifies a test of an attribute, each branch corresponds to an outcome of the test, and each leaf node represents a classification prediction.

Neural networks are represented by a set of interconnected units; each unit has multiple inputs and produces a single output. Incoming signals are weighted, transformed and passed on to the output units.

Bayes – The classifier learns the conditional probability of each attribute value from the training data given the classification of each instance. To classify an unknown instance, Bayes’ theorem is applied to compute the probability of a particular class value given the attributes of the new instance.”

Now consider the technology advancement and the complexity of data available 13 years on from that paper. AI can now begin to replicate judgemental nudges and biases based on common material changes like price, attribution data, manager experience, tenure, benchmark, fund changes, moving firm, news flow and so on. I began to imagine whether fund selection could be derived from AI: to screen thousands of funds and make judgements, shortlist recommendations, assess suitability and compatibility against a mandate or investor needs, and monitor the outcome of those decisions. This is especially so when we consider key advances in Differentiable Neural Computing (DNC). What DNC does is create digital memory; DNC can literally read and rewrite memory, so it becomes iterative. It was used to enable AlphaGo to beat the best Go player in the world across 250 to the power 150 possible moves. Secondly, add program synthesis: a program like DeepCoder can literally data-mine and piggy-back other algorithms to solve a problem in seconds, and there is around 30,000 GB of new data on the Internet every second for these programs to access.
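
Returning to the decision-tree / neural-network / naive-Bayes comparison above, a hedged sketch using scikit-learn’s stock implementations on synthetic fund data (the features, labels and “selection” rule are invented, not Ludwig and Pivioso’s dataset or code) might look like this:

```python
# Sketch: comparing a decision tree, a neural network and naive Bayes on
# synthetic "fund" data, in the spirit of the comparison described above.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 600
X = np.column_stack([
    rng.normal(0.05, 0.04, n),  # trailing 3-year annualised return
    rng.normal(0.12, 0.05, n),  # volatility
    rng.normal(1.2, 0.6, n),    # total expense ratio (%)
    rng.integers(1, 20, n),     # manager tenure in years
])
# Invented rule: cheaper, less volatile funds with decent returns get "selected".
y = (X[:, 0] - 0.5 * X[:, 1] - 0.02 * X[:, 2] + 0.002 * X[:, 3] > 0.0).astype(int)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "neural net":    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "naive Bayes":   GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:13s} mean CV accuracy: {scores.mean():.2f}")
```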

This takes us to the toughest question. Can the human condition still infect AI? It is the question that wrangles the industry today. Can ROBO act in a Fiduciary way, to put the interests of the client ahead of others? This is clearly a concern for regulators, not only in terms of the original coding but also subsequent changes that arise as a consequence of self-learning. Such protections could become hardwired, their needs monitored and programmers regulated. Firstly, AI can follow the Three Laws of Robotics by the science fiction author Isaac Asimov.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence so long as such protection does not conflict with the First or Second Laws.

One of the key problems today in big firms is that Finance staff do not sufficiently understand code and coders do not understand Finance. Regulators often understand neither. This will require tomorrow’s coders to have both technical knowledge of Finance and programming expertise to satisfy a Fiduciary Test. Small Fintech start-ups can exploit this void, in the detail between Technology and Finance. Consider that if AI can be programmed to remove human error, then what role is left for regulators? It will change. In response, the Financial Conduct Authority proposed a new Certified regime (CP17-25) for anyone with responsibility for:

• approving the deployment of a trading algorithm or a material part of one
• approving the deployment of a material amendment to a trading algorithm or a material part of one, or the combination of trading algorithms
• monitoring or deciding whether or not the use or deployment of a trading algorithm is or remains compliant with the firm’s obligations

The fiduciary duty falls initially to the programmer of the algorithm that instructs the programme to make decisions; ultimately, a regulated person has to be accountable for the programmer, the program and the outcomes. Taking these laws, it is not unfathomable that computers can be programmed to put the interest of the client first and foremost, and to:

  1. Uphold a fiduciary standard; all conflicts of interest must be disclosed. A computer has no conflicts unless they are first programmed. Like a driverless car, its function is to serve the purpose without question.
2. A fiduciary has a “duty to care” and must continually monitor not only a client’s investments, but also their changing financial situation. A computer can monitor 24/7 continuously and is not restricted by fatigue or the adviser/fund selector’s diary. A sequence can be included if the client does not supply an update within x days or could be linked to the client’s accounts, email, diary and so on.
3. Understand changes to a client’s risk tolerance, perhaps after a painful bear market. Perhaps there was a family change. Under the suitability standard, the financial planning process could begin and end in a single meeting. For fiduciaries, that first client meeting marks only the beginning of the legal obligation. We have seen the term ‘orphan clients’, and humans have a great track record of dropping less profitable clients (value pools).
4. Monitor, adapt, assess fund changes. The reality is that many fund investors do not monitor their decisions often enough or with objectivity. They are susceptible to heuristic biases. Yet a computer can continuously monitor cost, turnover, risk, changes and performance. It can monitor twitter feeds, performance, fund manager commentary, portfolio positions, information supplied by the client, instructions, deal flow, thousands if not millions of data points analysed through neural networks.

Adding Asimov’s rules into the AI subroutines becomes the safety net to ensure the program operates efficiently and investor aims are managed. AI can even offer the Robo Fund Selector a framework to set ESG criteria and identify better solutions to improve ethical and sustainable investing. It can employ new metrics to help investors understand their impact on greenhouse gas emissions, the economy and the environment.

According to the New Fund Order, Finance structures will continue to change over the next two decades and beyond: an unrelenting digitalisation of the value chain. As mutual funds in turn become managed by AI, so too will humans become more challenged to understand, select and manage them. They need AI to solve the equation. How then to survive the kaiju, how to survive digital death? Be prepared; my lecture on AI Robo Fund Selection is available at FintechCircle Institute. https://institute.fintechcircle.com/courses/course-v1:Beckett+C1+Programming_AI_Robo_Fund_Selection/about

I finished by saying ‘if this sounds all too monstrous then you’re probably right. Godzilla approaches.’ Cue nervous clap, more stunned disbelief and many interesting questions. A day later I was contacted by one of the students who had attended to say she would change the focus of her dissertation. Hope then.
