Choose Best Python Compiler For Your Machine Learning Project – Detailed Overview

This article was published as a part of the Data Science Blogathon.

Introduction

Python is a widely used programming language and has different execution environments. It has a wide range of compilers and IDEs to execute Python programs, e.g. PyCharm, PyDev, Jupyter Notebook, Visual Studio Code, and many more. A compiler is a special program that converts code written in a human-readable high-level language into a machine-readable low-level language.
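To see this translation step concretely, CPython itself first compiles source code into bytecode before executing it. Below is a minimal sketch using only the standard library; the tiny function is purely illustrative:

import dis

def add(a, b):
    # a trivial function, used only to inspect the generated bytecode
    return a + b

# compile() turns source text into a code object (the low-level form),
# and dis.dis() shows the bytecode instructions CPython will execute
code = compile("x = 1 + 2", "<example>", "exec")
dis.dis(code)
dis.dis(add)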


So in this blog, I am going to cover my personal favorite top 6 Python compilers/IDEs that are useful for Python developers and data scientists. So let’s get started!


List of Python Compilers

Here is a list of widely used compilers/IDEs for executing Python programs.

PyCharm

Spyder

Visual Studio Code

PyDev

Jupyter Notebook

Sublime Text

PyCharm

It was created by JetBrains and is one of the best and most widely used Integrated Development Environments (IDEs) for Python. Developers use this IDE to write clean, maintainable Python code productively. PyCharm provides smart assistance, helps developers write good-quality code, and saves time by performing fast compilation.

Price: Free

Language Supported: English

Supported Platform: Microsoft Windows, Mac, Linux

Developed by: JetBrains


Features of PyCharm

It supports more than 1100 plugins

Provides an option to write your own plugins

It has a code navigator, code editor, and fast & safe refactoring

It provides developers with options to detect errors, fix them quickly, auto-complete code, etc.

It can be easily integrated with an IPython notebook.

It provides functionality to integrate debugging, deployments, testing, etc

Pros

It is very easy to use

Installation is very easy

Very helpful and supportive community

Cons

In the case of large data, it becomes slow

Not beginner-friendly

Check the official page here: PyCharm

Spyder

Spyder is a free, open-source IDE designed for scientific Python development and is widely used by data scientists; it ships by default with the Anaconda distribution.

Price: Free

Language Supported: English

Supported Platform: Microsoft Windows, Mac, Linux

Developed by: Pierre Raybaut


Features

Provides auto-code completion and syntax highlighting feature

It supports multiple IPython consoles

With the help of its GUI, users can edit and explore variables

It provides a debugger to check step-by-step execution

Users can see the command history in the console

Pros

It is open-source and free

To improve the functionalities, it supports additional plugins

Provides a strong debugger

Cons

The interface is quite old-fashioned

Difficult to find the terminal in this compiler

Check the official page here: Spyder

Visual Studio Code

This IDE was developed by Microsoft and first released in 2015. It is free and open-source, lightweight, and very powerful. It provides features such as unit testing, debugging, fast code completion, and more. It has a large number of extensions for different uses; for example, if you want to use C++, install the C++ extension, and similarly install a different extension for each programming language you need.

Price: Free

Language Supported: English

Supported Platform: Microsoft Windows, Mac, Linux

Developed by: Microsoft

Features

It has an inbuilt Command Line Interface

It has integrated Git support that allows users to add, commit, pull, and push changes to a remote Git repository through a straightforward GUI

It has an API for debugging

Visual Studio Code Live Share is a feature that enables you to share your VS Code instance and allow someone remote to control and run other things like the debugger

Pros

It supports multiple programming languages eg. Python, C/C++, Java etc

Provides an auto code completion feature

It has built-in plugins

Cons

Sometimes it crashes and shuts down

The interface isn’t all that great, and it takes some time to start up

Check the official page here: Visual Studio Code

PyDev

PyDev is free and open-source; anyone can install it from the web and begin using it right away. It is one of the most usable IDEs and is preferred by a large portion of Python developers.

Price: Free

Language Supported: English

Supported Platform: Microsoft Windows, Mac, Linux

Developed by: Appcelerator

Features

It provides functionalities such as debugging, code analysis, refactoring, etc

Provides error parsing, code folding, and syntax highlighting

It supports the Black formatter, virtual environments, PyLint, etc

Pros

It supports Jython, Django Framework, etc

It offers support for different programming languages like Python, Java, and C/C++

Provides auto-code completion and syntax highlighting feature

Cons

When multiple plugins are installed, the performance of PyDev diminishes

Check the official page here: PyDev

Jupyter Notebook

It is one of the most widely used Python IDEs for data science and machine learning. It is an open-source, web-based interactive environment. It permits us to create and share documents that contain mathematical equations, plots, visuals, live code, and readable text. It supports many languages such as Python, R, and Julia, but it is mostly used for Python.

Price: Free

Language Supported: English

Supported Platform: Microsoft Windows, Mac, Linux

Developed by: Brian Granger, Fernando Perez


Features

Easy collaboration

Provides the option to download notebooks in many formats like PDF, HTML, etc

It provides presentation mode

Provides easy editing

Provides cell-level and selection-based code execution, which is helpful for data science

Pros

It is beginner-friendly and perfect for data science newbies.

It supports multiple languages like Python, R, Julia, and many more

With the help of data visualization libraries such as matplotlib and seaborn, we can visualize graphs within the IDE

It has a browser-based interface

Cons

It doesn’t provide good security

It doesn’t provide code correction

Not effective for large real-world projects – better suited to prototypes and exploratory work

Check the official page here: Jupyter Notebook

Sublime Text

Sublime Text is an IDE that comes in two versions, free and paid; the paid version contains additional features. It has various plugins maintained under free-software licenses, and it supports many other programming languages besides Python, for instance Java and C/C++.

Sublime Text is very fast compared to other editors. One can also install packages for debugging, code linting, and code completion.

Price: Free evaluation version; paid license for continued use

Language Supported: English

Supported Platform: Microsoft Windows, Mac, Linux

Developed by: Jon Skinner


Features

Provides options for customization

Instant switching between different projects

It provides split editing

It has a Goto Anything option that allows users to jump to any file, line, or symbol

It supports multiple languages such as Python, Java, and C/C++

It provides Command Palette

It has a distraction-free mode too

Pros

Very interactive interface – very handy for beginners

Provides plugins which are very helpful in debugging and text highlighting

Provides timely suggestions for accurate syntax

It provides a free version

Working on different projects at the same time is possible

Cons

Does not work well with large documents

One of the most annoying things is that it doesn’t save documents automatically.

At times, plugins are difficult to handle.

Check the official page here: Sublime Text

Frequently Asked Questions

Q1. Which compiler is best for Python?

A. Python is an interpreted language, and it does not require compilation like traditional compiled languages. However, popular Python interpreters include CPython (the reference implementation), PyPy (a just-in-time compiler), and Anaconda (a distribution that includes the conda package manager and various scientific computing libraries).

Q2. What is a Python compiler?

A. In the context of Python, a compiler is a software tool that translates Python code written in high-level human-readable form into low-level machine code or bytecode that can be executed directly by a computer. The compiled code is typically more efficient and faster to execute than the original Python source code. Python compilers can optimize the code, perform static type checking, and generate standalone executable files or bytecode files that can be run on a specific platform or within a virtual machine. Examples of Python compilers include Cython, Nuitka, and Shed Skin.
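For instance, the standard library alone can produce bytecode ahead of time. Here is a minimal sketch; the file and directory names are placeholders for your own project:

import py_compile
import compileall

# compile a single file to a .pyc bytecode file under __pycache__
py_compile.compile('script.py')

# or compile every module in a directory tree at once
compileall.compile_dir('my_project/')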

Conclusion

So in this article, we have covered my top 6 Python compilers/IDEs for data scientists. I hope you learned something from this blog and that it works out well for your project. Thanks for reading and for your patience. Good luck!

You can check my articles here: Articles

Connect me on LinkedIn: LinkedIn

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.


Tracking Your Machine Learning Project Changes With Neptune

Introduction

Working as an ML engineer, it is common to be in situations where you spend hours building a great model with the desired metrics after carrying out multiple iterations and hyperparameter tuning, but cannot get back to the same results with the same model, only because you missed recording one small hyperparameter.

What could save you from such situations is keeping track of the experiments you carry out in the process of solving an ML problem.

If you have worked on any ML project, you would know that the most challenging part is arriving at good performance – which makes it necessary to carry out several experiments, tweaking different parameters and tracking each of them.

You don’t want to waste time looking for that one good model you got in the past – a repo of all your past experiments makes this hassle-free.

Just a small change in alpha and the model accuracy goes through the roof – capturing the small changes we make in our models and their associated metrics saves a lot of time.

All your experiments under one roof – experiment tracking helps in comparing all the different runs you carry out by bringing all the information under one roof.

Should we just track the machine learning model parameters?

Well, No. When you run any ML experiment, you should ideally track multiple numbers of things to enable reproducing experiments and arriving at an optimized model:


Code: Code that is used for running the experiments

Data: Saving versions of the data used for training and evaluation

Environment: Saving the environment configuration files like  ‘Dockerfile’,’requirements.txt’ etc.

Parameters: Saving the various hyperparameters used for the model.

Metrics: Logging training and validation metrics for all experimental runs.

Whether you are a beginner or an expert in data science, you would know how tedious the process of building an ML model is, with so many things going on simultaneously – multiple versions of the data, different model hyperparameters, numerous notebook versions, etc. – which makes manual recording unfeasible.

Fortunately, there are many tools available to help you. Neptune is one such tool that can help us track all our ML experiments within a project.

Let’s see it in action!

Install Neptune in Python

In order to install Neptune, we could run the following command:

pip install neptune-client

For importing the Neptune client, we could use the following line:

import neptune.new as neptune

Does it need credentials?

We need to pass our credentials to the neptune.init() method to enable logging metadata to Neptune.

run = neptune.init(project='', api_token='')

Logging the parameters in Neptune

We use the iris dataset here and apply a random forest classifier to it. We then log the parameters and metrics of the model using Neptune.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from joblib import dump

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, test_size=0.4, random_state=1234)

params = {'n_estimators': 10,
          'max_depth': 3,
          'min_samples_leaf': 1,
          'min_samples_split': 2,
          'max_features': 3,
          }

clf = RandomForestClassifier(**params)
clf.fit(X_train, y_train)

y_train_pred = clf.predict_proba(X_train)
y_test_pred = clf.predict_proba(X_test)

train_f1 = f1_score(y_train, y_train_pred.argmax(axis=1), average='macro')
test_f1 = f1_score(y_test, y_test_pred.argmax(axis=1), average='macro')

To log the parameters of the above model, we could use the run object that we initiated before as below:

run['parameters'] = params

Neptune also allows code and environment tracking while creating the run object as follows:

run = neptune.init(project='stateasy005/iris', api_token='', source_files=['*.py', 'requirements.txt'])

Can I log the metrics as well?

The training & evaluation metrics can be logged again using the run object we created:

run['train/f1'] = train_f1
run['test/f1'] = test_f1

Shortcut to log everything at once?

We can create a summary of our classifier model that will by itself capture different parameters of the model, diagnostic charts, a test folder with the actual predictions, prediction probabilities, and different scores for all the classes like precision, recall, support, etc.

This summary can be obtained using the following code:

import neptune.new.integrations.sklearn as npt_utils

run['cls_summary'] = npt_utils.create_classifier_summary(clf, X_train, X_test, y_train, y_test)

The summary gets logged into folders on the Neptune UI as shown below:

What’s inside the Folders? 

The ‘diagnostic charts’ folder comes in handy, as one can assess their experiments using multiple metrics with just one line of code on the classifier summary.

The ‘all_params’ folder comprises the different hyperparameters of the model. These hyperparameters help one compare how the model performs at a set of values and after tuning them. Tracking the hyperparameters additionally helps one go back to the exact same model (with the same hyperparameter values) when needed.

The trained model also gets saved in the form of a ‘.pkl’ file which can be fetched later to use. The ‘test’ folder contains the predictions, prediction probabilities, and the scores on the test dataset.

We can get a similar summary if we have a regression model using the following lines:

import neptune.new.integrations.sklearn as npt_utils

run['rfr_summary'] = npt_utils.create_regressor_summary(rfr, X_train, X_test, y_train, y_test)

Similarly, for clustering as well, we can create a summary with the help of the following lines of code:

import neptune.new.integrations.sklearn as npt_utils

run['kmeans_summary'] = npt_utils.create_kmeans_summary(km, X, n_clusters=5)

Here, km is the name of the k-means model.

How do I upload my data on Neptune?

We can also log CSV files to a run and see them on the Neptune UI using the following line of code:

run['test/preds'].upload('path/to/test_preds.csv')

Uploading Artifacts to Neptune

Any figure that one plots using libraries like matplotlib, plotly, etc. can be logged to Neptune as well.

import matplotlib.pyplot as plt

plt.plot(data)
run["dataset/distribution"].log(plt.gcf())

In order to download the same files later programmatically, we can use the download method of the ‘run’ object using the following line of code:

run['artifacts/images'].download()

Final Thoughts

In this article, I tried to cover why experiment tracking is crucial and how Neptune can help facilitate that consequently leading to an increase in productivity while conducting different ML experiments for your projects. This article was focused on ML experiment tracking but we can carry out code versioning, notebook versioning, data versioning, environment versioning as well with Neptune.

There are of course many similar libraries available online for tracking the runs which I would try to cover in my next articles.

About Author

Nibedita Dutta

Nibedita holds a master’s in Chemical Engineering from IIT Kharagpur and is currently working as a Senior Consultant at AbsolutData Analytics. In her current capacity, she works on building AI/ML-based solutions for clients from an array of industries.


The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.


Top Certifications For SAS, R, Python, Machine Learning Or Big Data

We released our rankings for various long-duration analytics programmes in India for 2014 – 15 last week. They were greeted with unparalleled enthusiasm and response from our audience. We continue our journey to help our audience find the best analytics trainings and resources.

This week, we will focus on ranking short duration courses or certification courses.

Latest Rankings: Top Certification Rankings of  2023 – 2023

Scope of these rankings:

In this round, we are ranking various short duration certification courses accessible in India. So the consideration set will include courses either run in India or are available online. We will exclude courses / certifications / boot camps running in other countries.

Why rank certification courses and not institutes?

One of the conscious calls, we took while coming up with the rankings was to rank courses and not the institutes. Why? Because that is how we think and that is what we need to make our learning decisions. We usually need to find the best courses specific to learn a language or a tool. You would want to do the course which is best to learn R or SAS or Python. Institute ranking would not be the right way to make these decisions.

Hence, we decided to rank the courses individually and not institutes. So, here are the ranks for various courses:

Certification courses for SAS:

Foundation Course in Analytics by Jigsaw Academy: Foundation course from Jigsaw Academy is an ideal first course for your data science career, if you want to learn SAS. The content is lucid and leaves you with enough knowledge on the subject to start your data science career. The coverage of the course is holistic as well – it covers everything from collecting and cleaning data to how to build various predictive models. What I really like about this course is that it makes the journey of becoming a data scientist easier. With its simple step-by-step approach, it is an ideal course for those who come from a non-statistics or a non-programming background.

SAS Institute – Predictive modeler: Predictive Modeler certification from the SAS institute is probably the best short term certification available on SAS. Typically run over 5 days, this course assumes that you know Base SAS and have been using it for about 6 months (SAS also offers a Base SAS certification separately). The reason why this course has been ranked second is because of the cost. SAS charges INR 75k+ for this certification. There can be travel costs over and above and if you want to learn Base SAS as well, you would double up the cost – pretty hefty for a 10 day course. For the motivated folks, SAS institute has started offering 2 courses online for free. You can do them to start, then practice for a few days and take up this course. This will save the cost to some extent, but you still need to shell out a fortune for this course.

Certified Business Analytics Professional by Edvancer Eduventures: A lower cost option compared to the first 2 courses. This course from Edvancer covers SAS and predictive modeling comprehensively. Edvancer provides a good proposition to get 60 hours of instructor led trainings at a relatively low cost. Definitely check them out if cost is a significant constraint for you.

There are other courses offered in the industry from the likes of AnalytixLabs, EduPristine, Analytics Training Institute which are not comprehensive from the stand point of becoming a predictive modeler. You should only consider them if cost is the only factor for you to decide.

Must Read: A step by step guide to learn SAS from the scratch

Certification courses for R:

Data Science specialization on Coursera: Probably the most definitive set of courses available for free. These are easy to follow, 2 – 3 hours per week per course. You can pick the courses you need and avoid the ones which you don’t need immediately or do them simultaneously, if you have more bandwidth. The only downside to this certification is lack of guidance from a mentor. You need to rely on forums for that role.

The Analytics Edge on edX: One of the most intensive courses to pursue, this course requires you to spend 15 – 16 hours every week. And if you do put them in, it covers everything you need to learn with R in less than 4 months. By the end of this course, you will be competing in a competition on Kaggle!

Data Science Certification from Jigsaw Academy: Again a comprehensive offering from Jigsaw with good-quality content and instructors. This course provides you with all you need to know to become a data scientist. The only complaint I have about the course is the cost – INR 26k for self-paced and INR 42k for instructor-led might seem high, given that there are a lot of free resources available on R. On the flip side, this course will expose you to business case studies and real-world problems better than any other course I have come across. If you are confident about your ability to pick up complex knowledge, stick to the free courses. But if you are intimidated by statistics or programming and feel you need some hand-holding, Jigsaw is the ideal place to learn.

Business Analytics with R from Edureka: A cost effective instructor led offering from Edureka, which covers the concepts well. You can also consider the data science offering from them for slightly higher fees and get functional knowledge about Hadoop and Machine Learning with R as well.

Certified R Programmer from Edvancer: This offering from Edvancer tries to serve people in the middle – those who are motivated enough to learn by self paced tutorials, but still need on demand support to help them out at times. Given that there is no dearth of self paced videos / tutorials on R, you should consider this course for its on demand support.

Data Analysis with R from Udacity also covers basic exploratory data analysis in R, but does not provide enough learning to build predictive models.

Must Read: A step by step guide to learn data science in R Programming

Certification courses for Python:

Mastering Python by Edureka: I personally like Python as a tool for data science. The ecosystem for Python is still evolving. Hence, it is difficult to find courses as comprehensive in Python as this one from Edureka. The course starts from basics of Python and goes on to make sure that you can apply machine learning using Python. One of the best offering to learn Python for data science.

Intro to Data Science by Udacity: Udacity has a whole bunch of courses which assume that you know Python. This particular course is a good introduction to Pandas and data wrangling using Pandas. While the course is a good introduction, it falls short on comprehensiveness and does not cover all your needs as a data scientist through this course.

Must Read: A comprehensive guide which teaches Python from the scratch

Certification courses for Machine Learning:

Machine Learning by Andrew Ng on Coursera: I would probably not be wrong if I called this the most popular course on Machine Learning. Prof. Andrew Ng explains even the most complicated topics in an easy-to-understand manner. A must-do course if you want to learn Machine Learning from scratch.

Learning from Data on edX: One of the most intensive courses, run by Prof. Abu-Mostafa. The course contains some really intensive exercises and assignments. It is not for the faint of heart, but for those who can endure – this is the best course on the subject.

Machine Learning courses from Udacity: The machine learning offerings from Udacity fall somewhere in between the two courses mentioned above. They don’t simplify the subject matter to the extent Prof. Andrew Ng does, and the problem sets are not as intensive as the course on edX.

Must Read: A beginners guide to conquer Machine Learning

Certification courses for Big Data:

Big Data and Hadoop from Edureka: Although the course does not come with Wiley certification, it is a very cost-effective option. The course covers the Big Data and Hadoop ecosystem in good detail and is clearly the most popular course among Edureka’s offerings.

A few other courses worth mentioning here are MongoDB fundamentals on Udacity and Mining massive datasets on Coursera. As the name suggests, the course on MongoDB provides you all the basics of working with MongoDB. Mining massive datasets on the other hand is a blend of machine learning and Big Data.

Short note about the methodology:

You can read more details about our methodology here. We ranked the courses on 4 parameters:

Breadth of the coverage

Quality of the content

Industry recognition

Value for Money

End Notes: If you like what you just read and want to continue your analytics learning, subscribe to our emails, follow us on Twitter or like our Facebook page.


An Overview Of Regularization Techniques In Deep Learning (With Python Code)

Introduction

One of the most common problems data science professionals face is avoiding overfitting. Have you come across a situation where your model performed exceptionally well on train data but was not able to predict test data? Or were you at the top of a competition on the public leaderboard, only to fall hundreds of places in the final rankings? Well – this is the article for you!

Avoiding overfitting can single-handedly improve our model’s performance.

In this article, we will understand the concept of overfitting and how regularization helps in overcoming the same problem. We will then look at a few different regularization techniques and take a case study in python to further solidify these concepts.

Note: This article assumes that you have basic knowledge of neural networks and their implementation in Keras. If not, you can refer to the below articles first:

What is Regularization?

Regularization is a technique used in machine learning and deep learning to prevent overfitting and improve the generalization performance of a model. It involves adding a penalty term to the loss function during training. This penalty discourages the model from becoming too complex or having large parameter values, which helps in controlling the model’s ability to fit noise in the training data. Regularization methods include L1 and L2 regularization, dropout, early stopping, and more.

Before we dive deep into the topic, recall the classic plot of training and testing error against model complexity: while going towards the right, the complexity of the model increases such that the training error reduces but the testing error doesn’t.

If you’ve built a neural network before, you know how complex they are. This makes them more prone to overfitting.

Regularization is a technique which makes slight modifications to the learning algorithm such that the model generalizes better. This in turn improves the model’s performance on the unseen data as well.

How does Regularization help reduce Overfitting?

Let’s consider a neural network which is overfitting on the training data as shown in the image below.

If you have studied the concept of regularization in machine learning, you will have a fair idea that regularization penalizes the coefficients. In deep learning, it actually penalizes the weight matrices of the nodes.

Assume that our regularization coefficient is so high that some of the weight matrices are nearly equal to zero.

Such a large value of the regularization coefficient is not that useful. We need to optimize the value of regularization coefficient in order to obtain a well-fitted model as shown in the image below.

Different Regularization Techniques in Deep Learning

Now that we have an understanding of how regularization helps in reducing overfitting, we’ll learn a few different techniques in order to apply regularization in deep learning.

L2 & L1 regularization

L1 and L2 are the most common types of regularization. These update the general cost function by adding another term known as the regularization term.

Cost function = Loss (say, binary cross entropy) + Regularization term

However, this regularization term differs in L1 and L2.

In L2, we have:

Cost function = Loss + (λ / 2m) × Σ‖w‖²

Here, lambda is the regularization parameter. It is the hyperparameter whose value is optimized for better results. L2 regularization is also known as weight decay as it forces the weights to decay towards zero (but not exactly zero).

In L1, we have:

Cost function = Loss + (λ / 2m) × Σ‖w‖

In this, we penalize the absolute value of the weights. Unlike L2, the weights may be reduced to zero here. Hence, it is very useful when we are trying to compress our model. Otherwise, we usually prefer L2 over it.
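To make the two penalty terms concrete, here is a minimal numpy sketch of how each one is computed from a weight vector (the weight values and lambda are arbitrary illustrations):

import numpy as np

weights = np.array([0.5, -1.2, 0.0, 2.0])
lam = 0.01  # regularization parameter (lambda), chosen arbitrarily here

l2_penalty = lam * np.sum(weights ** 2)     # sum of squared weights
l1_penalty = lam * np.sum(np.abs(weights))  # sum of absolute weights

# either penalty is added to the base loss to form the regularized cost
print(l2_penalty, l1_penalty)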

In Keras, we can directly apply regularization to any layer using the regularizers module.

Below is the sample code to apply L2 regularization to a Dense layer.

from keras import regularizers

model.add(Dense(64, input_dim=64,
                kernel_regularizer=regularizers.l2(0.01)))

Note: Here the value 0.01 is the value of regularization parameter, i.e., lambda, which we need to optimize further. We can optimize it using the grid-search method.
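As a rough sketch of such a grid search, the loop below trains a small model for each candidate lambda and keeps the best one. It assumes x_train, y_train, x_test, and y_test have been prepared as in the MNIST case study later in this article, and it uses the newer epochs= argument rather than the older nb_epoch= seen in some snippets below:

from keras.models import Sequential
from keras.layers import Dense
from keras import regularizers

best_lambda, best_acc = None, 0.0
for lam in [0.1, 0.01, 0.001, 0.0001]:
    # a small model, rebuilt from scratch for each candidate lambda
    model = Sequential([
        Dense(64, input_dim=784, activation='relu',
              kernel_regularizer=regularizers.l2(lam)),
        Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=5, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    if acc > best_acc:
        best_lambda, best_acc = lam, acc

print(best_lambda, best_acc)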

Similarly, we can also apply L1 regularization. We will look at this in more detail in a case study later in this article.

Dropout

This is the one of the most interesting types of regularization techniques. It also produces very good results and is consequently the most frequently used regularization technique in the field of deep learning.

To understand dropout, let’s say our neural network structure is akin to a standard fully connected network. At every training iteration, dropout randomly selects some nodes and removes them, along with all of their incoming and outgoing connections.

So each iteration has a different set of nodes, and this results in a different set of outputs. It can also be thought of as an ensemble technique in machine learning.

Ensemble models usually perform better than a single model as they capture more randomness. Similarly, dropout also performs better than a normal neural network model.

The probability of choosing how many nodes should be dropped is the hyperparameter of the dropout function. Dropout can be applied to both the hidden layers as well as the input layers.

Due to these reasons, dropout is usually preferred when we have a large neural network structure in order to introduce more randomness.

In Keras, we can implement dropout using the Keras core layers. Below is the Python code for it:

from keras.layers.core import Dropout

model = Sequential([
    Dense(output_dim=hidden1_num_units, input_dim=input_num_units, activation='relu'),
    Dropout(0.25),
    Dense(output_dim=output_num_units, input_dim=hidden5_num_units, activation='softmax'),
])

As you can see, we have defined 0.25 as the probability of dropping. We can tune it further for better results using the grid search method.

Data Augmentation

The simplest way to reduce overfitting is to increase the size of the training data. In machine learning, we were not able to increase the size of training data as the labeled data was too costly.

But, now let’s consider we are dealing with images. In this case, there are a few ways of increasing the size of the training data – rotating the image, flipping, scaling, shifting, etc. In the below image, some transformation has been done on the handwritten digits dataset.

This technique is known as data augmentation. This usually provides a big leap in improving the accuracy of the model. It can be considered as a mandatory trick in order to improve our predictions.

In Keras, we can perform all of these transformations using ImageDataGenerator. It has a big list of arguments which you can use to pre-process your training data.

Below is the sample code to implement it.

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(horizontal_flip=True)
datagen.fit(train)

Early stopping

Early stopping is a kind of cross-validation strategy where we keep one part of the training set as the validation set. When we see that the performance on the validation set is getting worse, we immediately stop training the model. This is known as early stopping.

In keras, we can apply early stopping using the callbacks function. Below is the sample code for it.

from keras.callbacks import EarlyStopping

EarlyStopping(monitor='val_err', patience=5)

Here, monitor denotes the quantity that needs to be monitored and ‘val_err’ denotes the validation error.

Patience denotes the number of epochs with no further improvement after which the training will be stopped. In the validation-error curve described above, once the validation error starts rising, each subsequent epoch results in a higher value of validation error. Therefore, 5 epochs after that point (since our patience is equal to 5), our model will stop, because no further improvement is seen.

Note: It may be possible that after 5 epochs (this is the value defined for patience in general), the model starts improving again and the validation error starts decreasing as well. Therefore, we need to take extra care while tuning this hyperparameter.

A case study on MNIST data with keras

By this point, you should have a theoretical understanding of the different techniques we have gone through. We will now apply this knowledge to our deep learning practice problem – Identify the digits. Once you have downloaded the dataset, start following the below code! First, we’ll import some of the basic libraries.

%pylab inline

import numpy as np
import pandas as pd
from scipy.misc import imread
from sklearn.metrics import accuracy_score
from matplotlib import pyplot
import tensorflow as tf
import keras

# To stop potential randomness
seed = 128
rng = np.random.RandomState(seed)

Now, let’s load the dataset.

data_dir = os.path.join(root_dir, 'data')
sub_dir = os.path.join(root_dir, 'sub')

## reading train file only
train = pd.read_csv(os.path.join(data_dir, 'Train', 'train.csv'))
train.head()

Take a look at some of our images now.

img_name = rng.choice(train.filename)
filepath = os.path.join(data_dir, 'Train', 'Images', 'train', img_name)

img = imread(filepath, flatten=True)

pylab.imshow(img, cmap='gray')
pylab.axis('off')
pylab.show()

# storing images in numpy arrays
temp = []
for img_name in train.filename:
    image_path = os.path.join(data_dir, 'Train', 'Images', 'train', img_name)
    img = imread(image_path, flatten=True)
    img = img.astype('float32')
    temp.append(img)

x_train = np.stack(temp)
x_train /= 255.0
x_train = x_train.reshape(-1, 784).astype('float32')
y_train = keras.utils.np_utils.to_categorical(train.label.values)

Create a validation dataset in order to optimize our model for better scores. We will go with a 70:30 train-and-validation dataset ratio.

split_size = int(x_train.shape[0] * 0.7)

x_train, x_test = x_train[:split_size], x_train[split_size:]
y_train, y_test = y_train[:split_size], y_train[split_size:]

First, let’s start with building a simple neural network with 5 hidden layers, each having 500 nodes.

# import keras modules
from keras.models import Sequential
from keras.layers import Dense

# define vars
input_num_units = 784
hidden1_num_units = 500
hidden2_num_units = 500
hidden3_num_units = 500
hidden4_num_units = 500
hidden5_num_units = 500
output_num_units = 10

epochs = 10
batch_size = 128

model = Sequential([
    Dense(output_dim=hidden1_num_units, input_dim=input_num_units, activation='relu'),
    Dense(output_dim=hidden2_num_units, input_dim=hidden1_num_units, activation='relu'),
    Dense(output_dim=hidden3_num_units, input_dim=hidden2_num_units, activation='relu'),
    Dense(output_dim=hidden4_num_units, input_dim=hidden3_num_units, activation='relu'),
    Dense(output_dim=hidden5_num_units, input_dim=hidden4_num_units, activation='relu'),
    Dense(output_dim=output_num_units, input_dim=hidden5_num_units, activation='softmax'),
])

Note that we are just running it for 10 epochs. Let’s quickly check the performance of our model.

# Note: Keras requires the model to be compiled before fit(); the same
# compile call applies to each re-defined model in the later snippets too
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

trained_model_5d = model.fit(x_train, y_train, nb_epoch=epochs, batch_size=batch_size, validation_data=(x_test, y_test))

Now, let’s try the L2 regularizer over it and check whether it gives better results than a simple neural network model.

from keras import regularizers

model = Sequential([
    Dense(output_dim=hidden1_num_units, input_dim=input_num_units, activation='relu',
          kernel_regularizer=regularizers.l2(0.0001)),
    Dense(output_dim=hidden2_num_units, input_dim=hidden1_num_units, activation='relu',
          kernel_regularizer=regularizers.l2(0.0001)),
    Dense(output_dim=hidden3_num_units, input_dim=hidden2_num_units, activation='relu',
          kernel_regularizer=regularizers.l2(0.0001)),
    Dense(output_dim=hidden4_num_units, input_dim=hidden3_num_units, activation='relu',
          kernel_regularizer=regularizers.l2(0.0001)),
    Dense(output_dim=hidden5_num_units, input_dim=hidden4_num_units, activation='relu',
          kernel_regularizer=regularizers.l2(0.0001)),
    Dense(output_dim=output_num_units, input_dim=hidden5_num_units, activation='softmax'),
])

trained_model_5d = model.fit(x_train, y_train, nb_epoch=epochs, batch_size=batch_size, validation_data=(x_test, y_test))

Note that the value of lambda is equal to 0.0001. Great! We just obtained an accuracy which is greater than our previous NN model.

Now, let’s try the L1 regularization technique.

## l1
model = Sequential([
    Dense(output_dim=hidden1_num_units, input_dim=input_num_units, activation='relu',
          kernel_regularizer=regularizers.l1(0.0001)),
    Dense(output_dim=hidden2_num_units, input_dim=hidden1_num_units, activation='relu',
          kernel_regularizer=regularizers.l1(0.0001)),
    Dense(output_dim=hidden3_num_units, input_dim=hidden2_num_units, activation='relu',
          kernel_regularizer=regularizers.l1(0.0001)),
    Dense(output_dim=hidden4_num_units, input_dim=hidden3_num_units, activation='relu',
          kernel_regularizer=regularizers.l1(0.0001)),
    Dense(output_dim=hidden5_num_units, input_dim=hidden4_num_units, activation='relu',
          kernel_regularizer=regularizers.l1(0.0001)),
    Dense(output_dim=output_num_units, input_dim=hidden5_num_units, activation='softmax'),
])

trained_model_5d = model.fit(x_train, y_train, nb_epoch=epochs, batch_size=batch_size, validation_data=(x_test, y_test))

This doesn’t show any improvement over the previous model. Let’s jump to the dropout technique.

## dropout
from keras.layers.core import Dropout

model = Sequential([
    Dense(output_dim=hidden1_num_units, input_dim=input_num_units, activation='relu'),
    Dropout(0.25),
    Dense(output_dim=hidden2_num_units, input_dim=hidden1_num_units, activation='relu'),
    Dropout(0.25),
    Dense(output_dim=hidden3_num_units, input_dim=hidden2_num_units, activation='relu'),
    Dropout(0.25),
    Dense(output_dim=hidden4_num_units, input_dim=hidden3_num_units, activation='relu'),
    Dropout(0.25),
    Dense(output_dim=hidden5_num_units, input_dim=hidden4_num_units, activation='relu'),
    Dropout(0.25),
    Dense(output_dim=output_num_units, input_dim=hidden5_num_units, activation='softmax'),
])

trained_model_5d = model.fit(x_train, y_train, nb_epoch=epochs, batch_size=batch_size, validation_data=(x_test, y_test))

Not bad! Dropout also gives us a little improvement over our simple NN model.

Now, let’s try data augmentation.

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(zca_whitening=True)

# loading data
train = pd.read_csv(os.path.join(data_dir, 'Train', 'train.csv'))
temp = []
for img_name in train.filename:
    image_path = os.path.join(data_dir, 'Train', 'Images', 'train', img_name)
    img = imread(image_path, flatten=True)
    img = img.astype('float32')
    temp.append(img)

x_train = np.stack(temp)
X_train = x_train.reshape(x_train.shape[0], 1, 28, 28)
X_train = X_train.astype('float32')

Now, fit the training data in order to augment.

# fit parameters from data
datagen.fit(X_train)

Here, I have used zca_whitening as the argument, which highlights the outline of each digit as shown in the image below.

## splitting
y_train = keras.utils.np_utils.to_categorical(train.label.values)
split_size = int(x_train.shape[0] * 0.7)

x_train, x_test = X_train[:split_size], X_train[split_size:]
y_train, y_test = y_train[:split_size], y_train[split_size:]

## reshaping
x_train = np.reshape(x_train, (x_train.shape[0], -1)) / 255
x_test = np.reshape(x_test, (x_test.shape[0], -1)) / 255

## structure using dropout
from keras.layers.core import Dropout

model = Sequential([
    Dense(output_dim=hidden1_num_units, input_dim=input_num_units, activation='relu'),
    Dropout(0.25),
    Dense(output_dim=hidden2_num_units, input_dim=hidden1_num_units, activation='relu'),
    Dropout(0.25),
    Dense(output_dim=hidden3_num_units, input_dim=hidden2_num_units, activation='relu'),
    Dropout(0.25),
    Dense(output_dim=hidden4_num_units, input_dim=hidden3_num_units, activation='relu'),
    Dropout(0.25),
    Dense(output_dim=hidden5_num_units, input_dim=hidden4_num_units, activation='relu'),
    Dropout(0.25),
    Dense(output_dim=output_num_units, input_dim=hidden5_num_units, activation='softmax'),
])

trained_model_5d = model.fit(x_train, y_train, nb_epoch=epochs, batch_size=batch_size, validation_data=(x_test, y_test))

Wow! We got a big leap in the accuracy score. And the good thing is that it works every time. We just need to select a proper argument depending upon the images we have in our dataset.

Now, let’s try our final technique – early stopping.

from keras.callbacks import EarlyStopping

trained_model_5d = model.fit(x_train, y_train, nb_epoch=epochs, batch_size=batch_size, validation_data=(x_test, y_test), callbacks=[EarlyStopping(monitor='val_acc', patience=2)])

You can see that our model stops after only 5 iterations, as the validation accuracy was not improving. It gives good results in cases where we run it for a larger number of epochs. You can say that it’s a technique to optimize the number of training epochs.

Frequently Asked Questions

Q1. What is regularization in deep learning?

A. Regularization in deep learning is a technique used to prevent overfitting and improve the generalization of neural networks. It involves adding a regularization term to the loss function, which penalizes large weights or complex model architectures. Regularization methods such as L1 and L2 regularization, dropout, and batch normalization help control model complexity and improve its ability to generalize to unseen data.

Q2. What is L1 regularization and L2 regularization?

A. L1 regularization, also known as Lasso regularization, is a method in deep learning that adds the sum of absolute values of the weights to the loss function. It encourages sparsity by driving some weights to zero, resulting in feature selection. L2 regularization, also called Ridge regularization, adds the sum of squared weights to the loss function, promoting smaller but non-zero weights and preventing extreme values.

Q3. What is dropout in neural network?

A. Dropout is a regularization technique used in neural networks to prevent overfitting. During training, a random subset of neurons is “dropped out” by setting their outputs to zero with a certain probability. This forces the network to learn more robust and independent features, as it cannot rely on specific neurons. Dropout improves generalization and reduces the risk of overfitting.

End Notes

I hope that now you have an understanding of regularization and the different techniques  required to implement it in deep learning models. I highly recommend applying it whenever you are dealing with a deep learning task. It will help you expand your horizons and gain a better understanding of the topic.


Build And Automate Machine Learning

Intel keeps snapping up startups to build out its machine learning and AI operations.

In the latest move, TechCrunch has learned that the chip giant has acquired cnvrg.io, an Israeli company that has built and operates a platform for data scientists to build and run machine learning models, which can be used to train and track multiple models, run experiments on them, build recommendations and more.

Intel confirmed the acquisition to us with a short note. “We can confirm that we have acquired Cnvrg,” a spokesperson said.


Intel is not disclosing any financial terms of the deal, nor who from the startup will join Intel.

Cnvrg, co-founded by Yochay Ettun (CEO) and Leah Forkosh Kolben, had raised $8 million from investors that include Hanaco Venture Capital and Jerusalem Venture Partners, and PitchBook estimates that it was valued at around $17 million in its last round.

It was only a week ago that Intel made another acquisition to boost its AI business, also in the area of machine learning modeling: it picked up SigOpt, which had developed an optimization platform to run machine learning modeling and simulations.


Cnvrg.io’s platform works across on-premise, cloud, and hybrid environments, and it comes in paid and free tiers (we covered the launch of the free tier, branded Core, a year ago).

It competes with the likes of Databricks, Sagemaker, and Dataiku, as well as smaller operations built on open-source frameworks.

Cnvrg’s pitch is that it provides an easy-to-use platform for data scientists so they can focus on devising algorithms and measuring how they work, not on building or maintaining the platform they run on.

While Intel isn’t saying much about the deal, it seems that some of the same rationale behind last week’s SigOpt acquisition applies here as well: Intel has been refocusing its business around next-generation chips to better compete with the likes of Nvidia and smaller players like GraphCore.

So it makes sense to also provide and invest in AI tools for customers, specifically services to help with the compute loads that they will be running on those chips.

It’s notable that in our article about the free Core tier a year ago, Frederic noted that those using the platform in the cloud can do so with Nvidia-optimized containers that run on a Kubernetes cluster.


Intel’s focus on the next generation of computing aims to offset declines in its legacy operations. In the last quarter, Intel reported a 3% decline in revenues, driven by a drop in its data center business.

It said that it expects the AI silicon market to be worth more than $25 billion by 2024, with AI silicon in the data center worth more than $10 billion in that period.

Understand Machine Learning And Its End-To-End Process

This article was published as a part of the Data Science Blogathon.

What is Machine Learning?

Machine Learning: Machine Learning (ML) is a highly iterative process in which models learn from past experiences and analyze historical data. On top of that, ML models are able to identify patterns in order to make predictions about the future from a given dataset.

Why is Machine Learning Important?

Since the 5 V’s (Volume, Velocity, Variety, Veracity, and Value) dominate the current digital world, most industries develop various models for analyzing their presence and opportunities in the market; based on the outcomes, they deliver the best products and services to their customers at vast scale.

What are the major Machine Learning applications?

Machine learning (ML) is widely applicable in many industries, their process implementations, and improvements. Currently, ML is used in multiple fields and industries with no boundaries.

Where is Machine Learning in the AI space?

Have a look at a typical AI Venn diagram and we can understand where ML sits in the AI space and how it is related to other AI components.

As we know the Jargons flying around us, let’s quickly look at what exactly each component talks about.

How are Data Science and ML related?

The Machine Learning process starts by taking data from multiple sources, followed by a fine-tuned processing of that data; this data becomes the feed for ML algorithms based on the problem statement – predictive, classification, and other models available in the ML space. Let us discuss each process one by one here.

Machine Learning – Stages: We can split the ML process into the 5 stages below.

Collection of Data

Data Wrangling

Model Building

Model Evaluation

Model Deployment

Identifying the business problem comes before the above stages: we must be clear about the objective and purpose of the ML implementation. To find the solution for the given/identified problem, we must collect the data and follow the stages below appropriately.

Data collection gathers data from different sources, which could be internal and/or external, to satisfy the business requirements/problems. Data could be in any format – CSV, XML, JSON, etc.; here Big Data plays a vital role in making sure the right data is in the expected format and structure.
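As a small illustration, pandas can load several of these formats directly; the file names below are placeholders for your own data sources:

import pandas as pd

df_csv = pd.read_csv('sales.csv')     # comma-separated values
df_json = pd.read_json('events.json') # JSON records
df_xml = pd.read_xml('catalog.xml')   # XML (requires pandas >= 1.3)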

Data Wrangling and Data Processing: The main objective of this stage and focus are as below.

Data Processing (EDA):

Understanding the given dataset and helping clean it up.

It gives you a better understanding of the features and the relationships between them

Extracting essential variables and leaving behind/removing non-essential variables.

Handling Missing values or human error.

Identifying outliers.

The EDA process maximizes insights into a dataset.

Handling missing values in the variables

Convert categorical into numerical since most algorithms need numerical features.

Correct variables that are not Gaussian (normal), since linear models assume the variables have a Gaussian distribution (see the sketch after this list).

If outliers are present in the data, we either truncate the data above a threshold or transform the data using a log transformation.

Scale the features. This is required to give equal importance to all the features, and not more to the one whose value is larger.

Feature engineering is an expensive and time-consuming process.

Feature engineering can be a manual process, or it can be automated
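Here is a minimal sketch of these preprocessing steps with pandas and scikit-learn; the toy column names and values are illustrative only:

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

# toy data standing in for your own dataset
df = pd.DataFrame({'city': ['A', 'B', 'A'], 'income': [30000.0, np.nan, 1000000.0]})

df['income'] = df['income'].fillna(df['income'].median())  # handle missing values
df['income'] = np.log1p(df['income'])                      # log transform to tame outliers/skew
df = pd.get_dummies(df, columns=['city'])                  # categorical -> numerical
df[['income']] = StandardScaler().fit_transform(df[['income']])  # scale the features
print(df)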

Training and Testing:

Training data determines the efficiency of the algorithm that is used to train the machine, and it is used to train the model.

Test data is used to see how well the machine can predict new answers based on its training.

Training

Training data is the data set on which you train the model.

Training data is the data from which the model has learned its experiences.

Training sets are used to fit and tune your models.

Testing

Test data checks whether the model has learnt well enough from the experiences it got in the training data set.

Test sets are “unseen” data used to evaluate your models.

Test data: after training the model, test data is used to test the efficiency and performance of the model.

The purpose of the random state in train test split: Random state ensures that the splits that you generate are reproducible. The random state that you provide is used as a seed to the random number generator. This ensures that the random numbers are generated in the same order.
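A minimal sketch of such a split with scikit-learn, using a fixed random_state so the split is reproducible:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

data = load_iris()
# 80% train / 20% test; random_state makes the split reproducible
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)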

Data Split into Training/Testing Set

We used to split a dataset into training data and test data in the machine learning space.

The split range is usually 20%-80% between testing and training stages from the given data set.

A major amount of data would be spent on to train your model

The rest of the amount can be spent to evaluate your test model.

But you cannot mix/reuse the same data for both Train and Test purposes

If you evaluate your model on the same data you used to train it, your model could be very overfitted. Then there is a question of whether models can predict new data.

Therefore, you should have separate training and test subsets of your dataset.

MODEL EVALUATION: Each model has its own evaluation methodology; some of the best evaluation metrics are listed below, followed by a short scikit-learn sketch after the classification list.

Evaluating the Regression Model.

Sum of Squared Error (SSE)

Mean Squared Error (MSE)

Root Mean Squared Error (RMSE)

Mean Absolute Error (MAE)

Coefficient of Determination (R2)

Adjusted R2

Evaluating Classification Model.

Confusion Matrix.

Accuracy Score.
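As referenced above, here is a short sketch computing a few of these metrics with scikit-learn; the y_true/y_pred arrays are toy placeholders for your own model outputs:

import numpy as np
from sklearn.metrics import (mean_squared_error, mean_absolute_error,
                             r2_score, confusion_matrix, accuracy_score)

# toy regression outputs
y_true_reg = np.array([3.0, 5.0, 7.5])
y_pred_reg = np.array([2.5, 5.2, 7.0])
mse = mean_squared_error(y_true_reg, y_pred_reg)
rmse = np.sqrt(mse)   # root mean squared error
mae = mean_absolute_error(y_true_reg, y_pred_reg)
r2 = r2_score(y_true_reg, y_pred_reg)

# toy classification outputs
y_true_cls = [0, 1, 1, 0]
y_pred_cls = [0, 1, 0, 0]
cm = confusion_matrix(y_true_cls, y_pred_cls)
acc = accuracy_score(y_true_cls, y_pred_cls)

print(rmse, mae, r2, acc)
print(cm)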

Deployment of an ML model simply means integrating the finalized model into a production environment and getting results to make business decisions.

So, I hope you are now able to understand the Machine Learning end-to-end process flow, and I believe it will be useful for you. Thanks for your time.
