How does AI work?

Part 2. Genetic Algorithms

Following on from my previous article on Neural Networks, this is a short and simple primer on the subject of Genetic Algorithms.

Read Part 1. Neural Networks (although each article can be read independently)

Article by Simon Challinor
Photo by Public Domain Pictures from Pexels

If a creature were to have ten babies, each very similar to its parent but slightly different, the chances of at least one of those offspring being superior at a particular task would be quite high.

This is one of the general concepts behind genetic algorithms (GAs). However, instead of creatures we are discussing algorithms and, of course, the most obvious question is: how can a computer program produce offspring?

If we think about nature (creatures, animals and even humans), we know that a great deal about us is controlled by our genes: long sequences of 'switches', with each gene controlling a particular aspect of our appearance or behavior.

We also know that computer programs operate based on their set of variables. Let's consider a computer program written to assist engineers in designing bridges. There are many variables to think about during the design process, including the choice of material, its thickness, its positioning within a structure and so on, but each of these decisions could be constrained to a set of possible values. For example, we might limit the program to a choice of ten materials, a thickness within a certain range, and so on, thereby reducing the program to a set of variables, each with a set of potential values. The variables and their values in our example could therefore be encoded into a data structure that resembles a gene sequence.
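As a rough illustration, such a gene sequence might be nothing more than a small data structure. The variable names, materials and ranges below are invented for the example and are not taken from any real engineering tool:

```python
import random

# Hypothetical design variables, each constrained to a set of possible values.
MATERIALS = ["steel", "concrete", "timber", "composite"]   # choice of material
THICKNESS_MM = range(100, 1001, 50)                        # deck thickness in millimetres
SUPPORT_COUNTS = range(2, 11)                              # number of supports

def random_genome():
    """One 'gene sequence': a chosen value for each design variable."""
    return {
        "material": random.choice(MATERIALS),
        "thickness_mm": random.choice(list(THICKNESS_MM)),
        "supports": random.choice(list(SUPPORT_COUNTS)),
    }

print(random_genome())   # e.g. {'material': 'steel', 'thickness_mm': 450, 'supports': 6}
```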

Moving forwards, let's say our engineering program actually operates by reading this gene sequence of data. When a gene changes, the program operates in a different way and designs a different type of bridge.

If we were to clone (or copy) our computer program many times and provide each copy with different gene values, we would have many bridge designs.

Now it's important to say that many of the bridges may not be good designs; they may be structurally unsound or unsuitable in numerous ways. For this particular problem we must also be able to simulate how the bridges might perform in the real world, with calculations for gravity, wind and earthquakes being likely tests. In the field of genetic algorithms we call this a fitness function.

A fitness function is like a score: it rates how well the algorithm performs in its environment.

So one part of the software here (the algorithms) acts as the creative genius, and another part of the software is able to assess and rate their designs within a simulated (but fixed) environment.
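In code, a fitness function is just a routine that takes one gene sequence and returns a score. Below is a minimal sketch that reuses the genome layout from the previous snippet; simulate_bridge() is a crude stand-in for a real structural simulation, and its formulas are invented purely so the example runs:

```python
from dataclasses import dataclass

@dataclass
class SimResult:
    structurally_sound: bool
    load_capacity: float   # e.g. tonnes
    cost: float            # e.g. thousands of dollars

def simulate_bridge(genome):
    """Stand-in for a real simulation of gravity, wind and earthquake loads."""
    strength = {"steel": 3.0, "concrete": 2.0, "timber": 1.0, "composite": 2.5}[genome["material"]]
    capacity = strength * genome["thickness_mm"] * genome["supports"] / 100
    cost = genome["thickness_mm"] * genome["supports"] * 0.5
    return SimResult(structurally_sound=capacity > 20, load_capacity=capacity, cost=cost)

def fitness(genome):
    """Score one design: higher is better."""
    result = simulate_bridge(genome)
    if not result.structurally_sound:
        return 0.0                          # unsound designs score nothing
    # Reward load capacity, penalise cost (the weighting is arbitrary).
    return result.load_capacity - 0.1 * result.cost
```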

We could indeed simply randomize thousands of algorithms by providing random gene sequences, but that would still be a very hit-and-miss process, so instead we work in generations. We can produce, say, ten genetic algorithms and find the best one (which would probably not be very good, but better than the others). We would then take the gene sequence from that algorithm and mutate that set of genes another ten times. This is akin to the algorithm producing ten offspring, each similar to itself but slightly different, and we would refer to the new set as 'generation 2'.

In generation 2 we would expect some of the GAs to be worse than the parent and some to be better. By continually taking the best GAs within each generation and breeding from them we can slowly move towards a solution.
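Continuing the same sketch, the whole generational loop fits in a few lines: mutate the best parent's genes, score the offspring, and keep the fittest as the parent of the next generation. The mutation rate and number of generations are arbitrary example values:

```python
def mutate(genome, rate=0.2):
    """Copy the parent and re-roll each gene with a small probability."""
    child = dict(genome)
    if random.random() < rate:
        child["material"] = random.choice(MATERIALS)
    if random.random() < rate:
        child["thickness_mm"] = random.choice(list(THICKNESS_MM))
    if random.random() < rate:
        child["supports"] = random.choice(list(SUPPORT_COUNTS))
    return child

population = [random_genome() for _ in range(10)]     # generation 1: ten random designs
for generation in range(50):
    parent = max(population, key=fitness)             # selection: keep the fittest
    population = [mutate(parent) for _ in range(10)]  # next generation: ten offspring

print(max(population, key=fitness))                   # the best design found so far
```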

This process simulates a tiny part of evolution and Darwin's theory of natural selection, often summarised as 'survival of the fittest'.

So far we have covered mutation of genes from a single parent, but there are many other aspects to genetic algorithms, including crossover (the process by which we combine the genes from two or more parents) and selection (the method by which we choose the best GAs from within a population).
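Sticking with the same hypothetical genome, a simple 'uniform' crossover can be sketched in a couple of lines: each gene in the child is taken at random from one of the two parents (other schemes cut the sequence at fixed points instead):

```python
def crossover(parent_a, parent_b):
    """Build a child by taking each gene from one of the two parents at random."""
    return {gene: random.choice([parent_a[gene], parent_b[gene]])
            for gene in parent_a}
```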

The final result of this process is to narrow in on a solution by leveraging the rules of natural selection.

In the field of artificial intelligence, genetic algorithms are just one method that permits creativity to arise from the binary depths of the computer rather than the mind of the programmer. 

How does AI work?

Part 1. Neural Networks

AI (artificial intelligence) has been a huge topic in recent years, and a large part of the AI subject centres around neural networks, but how do they actually function? This article aims to be a short primer on the subject.

Article by Simon Challinor
Photo by Alex Knight from Pexels

Much like the human brain, neural networks in computing function very differently from traditional computer programs.

In traditional computing we have two parts: the architecture of the computer and the software that runs upon that architecture. With artificial neural networks, however, these two parts are effectively the same thing. In order to change the output, the structure of the network itself must change.

A traditional computer program is developed first and then used, generally in those two discrete steps. A neural network, however, can be molded over time, or 'trained' as it's normally called.
I often think of a neural network like a piece of clay being forced into a mold. It's slowly pushed into its environment until it starts to take the same shape. When you extract the clay it's a proxy for the environment: you can make assumptions from it, or use it as a model to simulate the original environment.

The physical structure of the network is also the actual logic of the program. Like the brain, it consists of neurons, synapses, axons and so on. Neurons are like nodes and fire pulses to one another via the synapses and axons (connections). However, the intelligence isn't solely in the neurons; in fact it's mostly the structure of the connections between neurons that forms the logic of the program.

There are approximately 86 billion neurons in the human brain, which is, of course, a lot. However, when you consider that there are around 100 trillion connections between those neurons (each forming only part of a particular pathway), we can start to see why the brain's specific shape can bring about individual experience, memory and knowledge. Artificial neural networks leverage the same concept.
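To make that concrete, here is a minimal sketch of a single artificial neuron: the same inputs produce a different output purely because the connection strengths (the weights) differ. The numbers are arbitrary example values:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the incoming connections, squashed to a 0-1 'firing' level."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))    # sigmoid activation

# The same inputs give a different output when the 'wiring' (the weights) changes.
print(neuron([1.0, 0.5], [0.8, -0.4], 0.1))
print(neuron([1.0, 0.5], [0.1, 0.9], 0.1))
```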

When we train a neural network we are in fact changing the strength of the connections along a particular pathway.

As a child, the connections between your neurons are still being formed and reshaped. The process of accidentally burning yourself is a learning experience whereby the connections change, so that the pathways that caused the action are less likely to be taken again in the future.
We may think of a synapse (a connection between neurons) like a road. By reducing the road from a three-lane highway to a single lane, the traffic is far more likely to take a different route.
In artificial neural networks this process of learning from mistakes is commonly referred to as backpropagation.

We observe the output of a brand-new network when we expose it to its environment. We scold a child when they do something naughty, and similarly we can 'scold' a neural network immediately after it produces a result that we don't like. Knowing that the result is incorrect, and knowing the pathway that produced it, we can reduce all of those synaptic connections by just a little (perhaps a few percent), and by doing that we are pushing the clay a little further into the mold.
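As a toy illustration of that 'scolding' step (a deliberate simplification of what backpropagation actually does, not the real gradient calculation), every connection on the offending pathway is simply weakened by a few percent:

```python
def scold(weights_on_pathway, penalty=0.03):
    """Weaken each connection that contributed to a wrong answer by about 3%."""
    return [w * (1 - penalty) for w in weights_on_pathway]

pathway = [0.8, -0.4, 0.6]
print(scold(pathway))   # [0.776, -0.388, 0.582]: the clay pushed a little further into the mold
```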

This phase of testing the network uses training data, where we know the results in advance. For example, we might have ten thousand pictures of cats and dogs. If the network says 'dog' when the picture is a cat, we know how to refine it. Once we're done we can let the network loose in the wild.

What is DevOps really?

If you're anything like me, a developer who's used to working in small, dynamic teams throughout his 20-year career, the term DevOps can be a little confusing.

Article by Simon Challinor
Photo by Christina Morillo from Pexels

As the term became more commonplace, rather than hailing the arrival of a new approach, I found myself asking: hasn't it always been this way? Haven't we always been doing CI/CD (continuous integration and continuous deployment)? How is it that we can only be doing XP (Extreme Programming) now?
As with most terminology, searching for the real meaning of these terms will leave you with various explanations, and Google will serve up numerous generic paragraphs.

"Ah well sir, we weren't previously automating the entire build process." Yes we were (Apache Ant originated in 2000, and the Make build tool goes back much further).

"Well, we're using Agile methodologies now and our teams are far more efficient." Speak for yourself: my teams were efficient before Agile, and we used many of the same common-sense approaches without calling it Agile. Kanban is great, scrum meetings are wonderful, yes, yes, but we've always had meetings and we've always sat around a computer coding in pairs. Only we didn't find the need to define it as 'Pair Programming', write blogs about it and claim it as revolutionary.

DevOps is the meeting point of Development and Operations. If you were previously a developer who worked in a siloed environment, where you were given a specification, went away and programmed it for several months, and then handed it to your operations team to deploy, then I understand that DevOps would be a refreshing and more dynamic approach. I've never seen a team that operated like that, but according to Agile proponents it's the world that we were saved from.

Ok, cynicism aside, I do find value in some of these terms, and here's why. The truth is that terms are only signposts: they point in a general direction and they can mean different things to different groups. For me, the terms best describe the current generation of tools and the exact workflows that we use now. The words themselves are mostly meaningless, so let's not get so hung up on them.

Git, Docker, Kubernetes, Gradle, Jenkins and so on: tools that have made quantum leaps forward in how we develop and deploy, and that have shaped a particular zone of IT.

Where hardware and software previously had a clear boundary between them, with cloud and virtualization technologies the lines have become far more blurred in recent years. Software not only controls the build of other software; it also prescribes the requirements for virtual hardware resources and launches the machines necessary to run the applications. It maintains the life cycle of applications and describes the policies that are applied when hardware fails.

In short DevOps is more of a ‘thing’ these days because of the expansion of IT in general. The problem is that the words we are using to describe very specific concepts are far too generic. 

Cloud Resources as a Commodity

In my article 'Tech predictions for the 2020s' I discussed the relevance of a commodity market for cloud resources and why it might become a reality in the future. However, I recently read an article making the claim that cloud computing does not (or would not) fall into the category of a commodity.

One of the best points of that article was that commodities are nearly always natural resources such as wheat, corn and oil. That's true, but we also have energy markets: electricity, gas and various other utilities are traded based on the capacity of their national grid systems.

Article by Simon Challinor
Photo by energepic.com from Pexels

I think it still makes sense that cloud infrastructure will come to be seen as a high-level utility and, in the future, traded as a commodity similar to energy. However, the current state of cloud computing, still in relative infancy, is not quite ready to be viewed as a commodity. Cloud providers offer virtual machines (VMs being the units of computing) in all manner of combinations of virtual CPU count, CPU generation, RAM, bandwidth and so on, and discounts are offered based on tenancy: a three-year term paid upfront can be significantly cheaper than the same VM on a pay-as-you-go basis.

With all the complexity in product offerings and the variance between cloud providers, there's no proper benchmark for comparing services, but there should be.

Whilst it feels as though we have moved forwards significantly since the days of datacenters and the traditional hosting company, perhaps we have only taken a few steps into the concept of cloud. I believe we should still consider the current paradigm of cloud computing to be an 'early generation'.

The granularity of the product we are purchasing is currently too large to compare: a bag of apples versus a box of pears. Perhaps market forces will demand that the product is bought and sold in simpler quantities, such as machine cycles, at which point we may be able to benchmark one provider against another and a homogenized commodity market would start to appear.
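As a rough illustration of what such a benchmark might look like, the sketch below reduces a few entirely made-up VM offers to a single crude unit, price per vCPU-hour. A real comparison would also need to account for RAM, bandwidth, CPU generation and discount structures:

```python
# Hypothetical offers: (provider, vCPUs, RAM in GB, price per hour in USD). All figures are invented.
offers = [
    ("provider_a", 4, 16, 0.20),
    ("provider_b", 8, 32, 0.35),
    ("provider_c", 2, 8, 0.12),
]

# One crude common unit: price per vCPU-hour.
for name, vcpus, ram_gb, price_per_hour in offers:
    print(f"{name}: {price_per_hour / vcpus:.3f} USD per vCPU-hour")
```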

Happy New Year 2020

Tech predictions for the 2020s

As we move forwards into 2020, it's a good time to reflect, once again, on where we believe the general field of software and information technology is headed.

Article by Simon Challinor
Photo by Kaique Rocha from Pexels

Our Vanishing Devices

Over the last decade we have witnessed a continuous convergence of technologies, especially around mobile devices, where computing power has increased while simultaneously fitting into ever smaller form factors. We are approaching a point where many tasks can theoretically be performed on the smallest of devices, with only the size of the screen and the human interface dictating whether we prefer one particular device over another.

The cloud is becoming ever more prevalent and the rise of SaaS (Software as a Service) is likely still in its infancy. It would seem only a continuation of the same trend if our devices were to disappear altogether.

Indeed if all computing power was delegated to the cloud, the only elements necessary to remain present would be screens and human interface devices. Currently that last category would include the still ubiquitous mouse and keyboard combination.

On the subject of convergence it would seem logical that the various subdivisions between TVs, monitors or dedicated device screens would start to break down. A screen designed for a single function would have no place in the coming decade. 

Screen casting is becoming more mainstream, but a single universal technology shared by all the major hardware manufacturers is still missing. Apple is certainly one manufacturer that has been responsible for pushing its own proprietary technologies ahead of universal standards. Conversely, the Universal Serial Bus, better known as the USB connection and its associated technology, was developed in the 90s by seven companies: Compaq, DEC, IBM, Intel, Microsoft, NEC and Nortel. It's a shame, but perhaps the current environment, with Apple dominant, is not conducive to the same leaps in compatibility between devices.

Computing in the Cloud, Only

With processing and storage being pushed into the cloud, our devices could become simpler. This is similar to the 'thin client' and 'virtual desktop' concepts popularised in corporate environments from the 2000s onwards. Players such as Citrix provide server and client software for facilitating remote desktops, whereby the operating system and applications run in a server farm and only images of the desktop are transmitted to remote locations.

Over the last decade, simple Google Chromebooks (running mostly browser-based applications) and cloud storage services like Dropbox, iCloud and Google Drive, designed for the general public, have been a further continuation of the same trend. This trend is pushing towards computing in the cloud … only.

Hardware as a Commodity

Read about Virtual Computers here.

In an earlier article I considered the abstraction of computers and the separation of computer hardware away from the ‘virtual computers’ that temporarily exist within them.  This concept is especially relevant when considering ‘containerisation’ and ‘orchestration’ technologies like Docker and Kubernetes. These technologies allow virtual computers to jump from one physical piece of hardware to another with little effort.

Meanwhile, the physical raw ingredients of cloud computing itself are becoming a commodity. The units of computing, such as VMs (virtual machines), are provided by all the major players: Amazon (AWS), Microsoft Azure, AliCloud, Digital Ocean and others, and they all provide a number of data storage options.

Therefore, as the next few years pass, I would predict further homogenization of the market and eventually a single price for what is essentially the same commodity. As with oil, corn and precious metals, demand varies over time, so the market price would fluctuate and cloud capacity could be traded like other commodities. In a similar way to energy, forward-dated 'futures' contracts for processing power could be bought, sold and speculated upon as the maturity date approaches.

Hybrid Backup

When considering solutions for data storage and backup we need to be prudent.  I have already covered, in an earlier article, how the costs of cloud storage and data transfer can vary greatly depending on a multitude of different factors. 

As with the world at large, many of my clients are in a transition phase between hosting technology onsite and in the cloud. Some are dipping their toes in, while others are embracing the cloud and all the benefits it brings more readily.

Onsite NAS devices have been a staple technology in SMEs (small and medium-sized enterprises) for many years, but as cloud storage becomes more prevalent their place in the office may eventually be in question. However, while cloud storage costs remain relatively high, their place is still highly relevant.

In this “Hybrid” environment, one solution that I have found useful is provided by QNAP within their NAS (Network Attached Storage) devices. 

The QNAP NAS device provides an app, accessed via the admin web interface, called "Hybrid Backup Sync". It provides a set of tools for incremental backup and replication of local NAS data into the cloud. The software can connect to all the major cloud providers (AliCloud, AWS, Google, Dropbox and many others). Its fine-grained controls help the IT professional create scheduled jobs that match the data retention policies the company requires whilst remaining cost-efficient with cloud storage and transfer.

SME Cloud Backup Considerations

Recently I have been working with a client on securing their critical data both onsite and offsite. 

Article by Simon Challinor
Photo by Gnist Design from Pexels

Like many companies, they have realised the benefits of moving their applications to the cloud and slowly reducing the need for maintaining their own servers (hardware) within their offices. 

In this kind of situation, I have long realised that the word 'backup' can be misleading and is often misused; it is an oversimplification of a process that requires a great deal of thought for each type of data. The application the data is derived from will dictate how it must be restored in the event of a problem. Individual file sizes may vary greatly from one set of data to another, and this will have an impact on the cost of storage and data transfer (especially within the cloud).

Cloud storage can be expensive, but the major cloud players such as Microsoft Azure, AWS and AliCloud all provide numerous storage options at different price levels. The speed of the storage medium is usually one of the key factors, but providers also break the offering down along other dimensions, such as frequency of access, geographical region of availability and the reliability of the storage medium. Within its OSS (Object Storage Service) product offering, AliCloud provides classes such as "Standard", "Infrequent Access" and "Archive".

Whilst Archive is the cheapest per gigabyte, there are restrictions on how frequently the data can be accessed and how quickly it can be retrieved when it is needed. It can take up to a minute for data to be unfrozen from its Archive state before it can even start to be downloaded.

IT professionals tasked with designing backup solutions need a good knowledge of the underlying systems that created the data in order to make educated decisions about how that data should be stored and restored. Database dumps can be hundreds of gigabytes per file, and that data should be treated very differently to images, text documents or email files.
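As an illustration of that kind of thinking, the sketch below maps broad categories of data to an OSS-style storage class and an expected restore path. The class names mirror the AliCloud tiers mentioned above, but the mapping itself is an invented example rather than a recommendation:

```python
# Illustrative policy: match each kind of data to a storage class and a restore plan.
backup_policy = {
    "database_dumps":       {"class": "Archive",           "restore": "unfreeze, then bulk download"},
    "office_documents":     {"class": "Infrequent Access", "restore": "direct download"},
    "active_project_files": {"class": "Standard",          "restore": "direct download"},
}

for data_type, policy in backup_policy.items():
    print(f"{data_type}: store as {policy['class']}, restore via {policy['restore']}")
```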

Unfortunately, backup is often delegated to juniors within the company, or the remit is taken over by desktop support companies that had no part in designing the original systems.

Merry Christmas 2019

Git in Brief

Most British folk grow up knowing the word 'Git' as a derogatory and perhaps slightly offensive insult. However, in the world of computer science and programming, it happens to be one of the most useful tools available for making developers' lives easier.

Put simply, Git is a 'version control system': when several people are working together on the same document, it helps by managing a change log of small incremental modifications to the files in a directory.

“He added this line of text, she removed that line, and then added this comment .. etc” 

Under normal circumstances, when you look at a folder of files on your computer, it's a snapshot of those files as they are today. However, if Git were managing that folder, the files would have their history stored as a timeline and you could determine what has changed at certain points in time. Git is also a time machine for your files: you can travel forwards and backwards, reverting to prior states.

If you think of Git history as a train line then a Git “Commit” is a station.  At any station you may decide to “Branch” off and create a new line (with a different name) that diverges away from the original.  This is akin to “save as” and, for a period of time, we may work on a separate branch to avoid polluting the original “Master” branch.  We may experiment or develop new features here whilst the Master branch remains intact. However, with Git, it’s also possible to “Merge” the changes back into the Master branch at a later stage if we decide they should become part of the main line.
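On the command line, that commit / branch / merge workflow looks something like this (the file and branch names are invented for the example):

```
git init                          # start tracking the folder
git add report.txt
git commit -m "First draft"       # a 'station' on the main line
git checkout -b experiment        # branch off: a new line diverging from master
# ...edit report.txt...
git commit -am "Try a new idea"   # a station on the experiment branch
git checkout master
git merge experiment              # bring the experiment back into the main line
```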