A Deeper Understanding Is Needed To Improve Neural Networks


The development of neural networks is not a new thing. In fact, neural networks have been around since the 1940s, according to MIT News.

Yet it is only recently that serious interest in applying this technology has taken hold. To begin, let's define a neural network. According to the definition by Investopedia: “A neural network is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. Neural networks can adapt to changing input; so, the network generates the best possible result without needing to redesign the output criteria.”

The inspiration for these algorithms comes directly from the biological function of the human brain. As MIT News notes, "The one-layer networks of the 1960s and the two- to three-layer networks of the 1980s [have blossomed] into the 10-, 15-, even 50-layer networks of today." So when we talk about deep learning, we simply mean that today's networks stack layer upon layer, all interconnected in functionality and communication.

Neural networks are systems of hardware and/or software patterned after the workings of neurons in the human brain. Layer upon layer of data is associated to perform the acts of learning and memory that we group under artificial intelligence (AI). I believe that the marriage of data and artificial neural networks is our best chance to move toward real AI.

As we move further into the 21st century, we will be working alongside products built on deep learning technology. The problem is that the union of data analytics and neural networks (known as deep learning) is so new that data scientists, upper management and CEOs alike don't really understand what can be done with it. So how can this new technology ultimately advance the industry? In situations like this, the first thing I always make a point of understanding is the technology's limitations. Let's begin, then, with its shortcomings, so we can avoid setting unrealistic expectations.

Let's start with data. You see, neural networks are of no use unless there is data -- and a lot of it. The more data an ANN is given, the better it learns and remembers. Traditional linear machine learning algorithms tend to stagnate after a certain point, whereas neural networks can truly grow with more data and more usage. The point is that deep learning technology will truly be independently intelligent one day, but only if it can consume large amounts of data to learn, comprehend and remember.
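A toy sketch can make the "stagnation" point concrete. Below, a plain least-squares line is fit to a nonlinear target (a sine curve); its error barely changes no matter how many data points we add, because the model's capacity, not the data, is the bottleneck. This is an illustrative example of my own, not an experiment from any particular study.

```python
import math

def linear_fit_mse(n):
    """Least-squares line fit to y = sin(x) on n evenly spaced points in [0, 2*pi]."""
    xs = [i * 2 * math.pi / (n - 1) for i in range(n)]
    ys = [math.sin(x) for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    # Mean squared error of the best possible straight line
    return sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys)) / n

# 100x more data, essentially the same error: the linear model has stagnated.
small, large = linear_fit_mse(50), linear_fit_mse(5000)
```

A neural network with even one hidden layer, by contrast, can keep driving this error down as data grows, because it can bend to fit the curve.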

In our business, deep learning tools can make ID validation more accurate, even though they may be more expensive. By training the neural network on data such as facial landmarks, and on how to rotate and scale faces, we can determine with high accuracy whether an ID is real or fake. The key is that the system must have enormous amounts of data to learn from in order to produce results efficiently and effectively.
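The rotate-and-scale step mentioned above can be sketched in a few lines. This is a generic landmark-alignment routine of my own, not our production code: it rotates a set of 2-D facial landmarks so the eyes sit on a horizontal line, and scales them so the eye-to-eye distance is one unit -- a common normalization before landmarks are fed to a network. The coordinates and the `align_landmarks` name are purely illustrative.

```python
import math

def align_landmarks(points, left_eye, right_eye):
    """Rotate and scale 2-D landmarks so the eyes are horizontal and
    exactly one unit apart. left_eye/right_eye are indices into points."""
    (lx, ly), (rx, ry) = points[left_eye], points[right_eye]
    angle = math.atan2(ry - ly, rx - lx)   # tilt of the line between the eyes
    dist = math.hypot(rx - lx, ry - ly)    # interocular distance
    cos_a, sin_a = math.cos(-angle), math.sin(-angle)
    aligned = []
    for x, y in points:
        x, y = x - lx, y - ly                                # left eye -> origin
        x, y = x * cos_a - y * sin_a, x * sin_a + y * cos_a  # undo the tilt
        aligned.append((x / dist, y / dist))                 # unit eye distance
    return aligned

# A tilted face: left eye, right eye, nose tip (hypothetical coordinates).
face = [(10.0, 20.0), (30.0, 30.0), (22.0, 40.0)]
norm = align_landmarks(face, left_eye=0, right_eye=1)
```

After alignment, every face lands in the same canonical pose, so the network can spend its capacity on real/fake cues rather than on head tilt.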

Will this technology really change the way we live in every sector? To answer that question, you must understand the problem you are trying to solve. In some cases, traditional linear methods produce better outcomes. Let me explain by first describing another shortcoming of ANNs. Scientists know there is a black box effect in neural networks: they can't really figure out how an ANN interprets data.

Artificial neural networks are nonlinear models used to approximate the relationship between input and output. In a nutshell, they are powerful predictive tools compared to linear methods. The trade-off is that outcomes derived from ANNs are very difficult to interpret. They can approximate outcomes with minimal error, but minimal error is not good enough when you need a high-performance model that you must also be able to interpret.
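The textbook illustration of this nonlinearity is XOR: no single straight line through the plane can separate its outputs, yet a two-neuron hidden layer handles it easily. The weights below are hand-picked for clarity rather than learned by training, so this is a demonstration of capacity, not of any particular trained model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def tiny_mlp(x1, x2):
    """A 2-2-1 network with hand-set weights that computes XOR --
    something no single linear threshold on (x1, x2) can do."""
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # roughly "x1 OR x2"
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)    # roughly "x1 AND x2"
    return sigmoid(20 * h1 - 20 * h2 - 10)  # OR but not AND = XOR

outputs = [round(tiny_mlp(a, b)) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]]
print(outputs)  # -> [0, 1, 1, 0]
```

The power comes from composing simple thresholds into curved decision boundaries -- and that same composition is exactly what makes the final decision hard to trace back to the inputs.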

Let's use the example of home loans given by a bank. If a banker uses an ANN to evaluate a customer's data and the output approves the loan even though other banks denied that customer, the banker must be able to interpret the rationale behind the ANN's decision. This inability to truly interpret why a certain result was given is the shortcoming. In this example, you see the black box effect of ANNs, and the inability to truly understand how a certain decision was reached has data scientists stumped.
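One common partial remedy, which I sketch here as an assumption rather than a full solution, is local sensitivity analysis: nudge each input and see how much the score moves. The `loan_model` below is an entirely made-up stand-in for an opaque trained network, and the feature names are hypothetical.

```python
import math

def loan_model(income, debt_ratio, years_employed):
    """Stand-in for an opaque trained scorer (entirely hypothetical weights)."""
    z = 0.00004 * income - 3.0 * debt_ratio + 0.1 * years_employed
    return 1.0 / (1.0 + math.exp(-z))

def sensitivity(model, inputs, index, delta):
    """How much the score moves when one input is nudged by +/- delta.
    A crude local probe, not a full explanation of the model."""
    lo = list(inputs); lo[index] -= delta
    hi = list(inputs); hi[index] += delta
    return abs(model(*hi) - model(*lo))

applicant = [60000, 0.4, 5]  # income, debt ratio, years employed
# Nudge each feature by ~5% of its value and compare the effect on the score.
effects = [sensitivity(loan_model, applicant, i, 0.05 * abs(applicant[i]))
           for i in range(3)]
```

A probe like this can tell the banker which inputs the decision is most sensitive to for this applicant, but it still does not explain the network's internal reasoning -- which is why the black box problem remains open.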

Another example: when you input an image of a computer and the ANN outputs "television," it is hard to interpret why the outcome was incorrect. The question CEOs and data scientists ask is, how did the image get confused within the network's layers? In business, you must be able to precisely, legitimately and confidently explain why certain decisions were made. That explanation is scrutinized even more closely when the output comes from AI and there is no real basis for unfavorable results.

Lastly, ANNs can be expensive compared to simpler models such as random forests. It all depends on the complexity of the data, the number of network layers you are working with, and the intended use. A neural network requires more data preprocessing than a random forest, and ANNs tend to have more hyperparameters, which can eat up training time compared to a random forest model. Adding trees to a random forest, by contrast, does not require significant extra training time or cost.
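A quick way to see where the cost comes from, offered here as a rough illustration rather than a benchmark: the number of weights in a fully connected network grows with the product of adjacent layer widths, and every weight is touched on every training pass. The layer sizes below are arbitrary.

```python
def mlp_param_count(layer_sizes):
    """Trainable weights plus biases in a fully connected network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A modest network on 20 input features, two hidden layers of 64 units,
# one output: already thousands of parameters to fit.
print(mlp_param_count([20, 64, 64, 1]))  # -> 5569
```

A random forest, by comparison, grows one independent tree at a time, which is why adding estimators scales gently in both time and cost.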

Deep learning, AI, deep neural networks, data analytics and machine learning will change the way we work and live in the future. Current expectations may be a bit high, but I believe there is amazing potential in this technology. Using resources wisely, with a comprehensive understanding of both the problem and the solution, is the key to using neural networks to benefit your company and produce truly effective, profitable outcomes.
