DevOps for Data Scientists: Taming the Unicorn

NovelVista

Last updated 22/07/2021

When most data scientists start working, they are equipped with all the neat math concepts they learned from school textbooks.

However, pretty soon, they realize that the majority of data science work involves getting data into the format the model needs. Beyond even that, the model being developed is usually just one part of an application delivered to the end user.

Now, the proper thing for a data scientist to do is keep the model code version controlled in Git. A build service such as VSTS (now Azure DevOps) would then pull the code from Git, wrap it in a Docker image, and push that image to a Docker container registry. Once in the registry, the application would be orchestrated using Kubernetes.

Now, say all that to the average data scientist and their mind will completely shut down. Most data scientists only know how to deliver a static report or a CSV file of predictions.

However, how do we version control the model and add it to an app?

How will people interact with our website based on the model's output? How will it scale? All of this involves confidence testing, checking that nothing falls below a set threshold, sign-off from different parties, and orchestration between different cloud servers (with all their ugly firewall rules). This is where some basic DevOps knowledge comes in handy.
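
Before diving in, here is a minimal sketch of one common answer to the "add it to an app" question: persist the trained model as an artifact and expose it behind a small HTTP endpoint. The libraries (joblib, Flask), file names, and route below are my own illustrative choices, not anything prescribed in this post.

```python
# serve.py -- illustrative only: expose a saved model behind a tiny prediction API.
# Assumes the model was trained elsewhere and saved with joblib.dump(model, "model.pkl").
import joblib
from flask import Flask, request, jsonify

app = Flask(__name__)
model = joblib.load("model.pkl")          # the versioned, packaged model artifact

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]     # e.g. [[5.1, 3.5, 1.4, 0.2]]
    return jsonify({"prediction": model.predict(features).tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A service like this is what would end up inside the Docker image mentioned above, with the model artifact built and versioned alongside the code in the Git repository.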

What is DevOps?

Long story short, DevOps is the group of people who help developers (e.g. data scientists) and IT work together.

Developers have their own chain of command (i.e. project managers) who want to get product features out as soon as possible. For data scientists, this means changing model structure and variables. They couldn't care less what happens to the machinery: smoke coming out of a data center is fine, as long as they get the data they need to finish the end product.

On the other end of the spectrum is IT. Their job is to ensure that all the servers, networks, and pretty firewall rules are maintained. Cybersecurity is also a huge concern for them. They couldn't care less about the company's clients, as long as the machines are working perfectly. DevOps is the middleman between developers and IT. Some common DevOps responsibilities include:

  • Integration
  • Testing
  • Packaging
  • Deployment

The rest of the blog will explain the entire Continuous Integration and Deployment process in detail (or at least the parts relevant to a data scientist). An important note before reading on: understand the business problem and do not get married to the tools.

The tools mentioned in the blog will change, but the underlying problem will remain roughly the same (for the foreseeable future at least).

Source Control

Imagine pushing your code to production. And it works! Perfect. No complaints. Time goes on, and you keep adding new features and developing it further. However, one of these features introduces a bug that badly messes up your production application. You were hoping one of your many unit tests might have caught it.

However, just because something passed all your tests doesn't mean it's bug-free; it just means it passed all the tests currently written. Since it's production code, you do not have time to debug. Time is money and you have angry clients.

Wouldn't it be simpler to revert to a point when your code worked?

That's where version control comes in. In Agile-style development, the product keeps evolving in bits and pieces over an indefinite time period. For such applications, some form of version control is really useful.

Personally, I like Git, but SVN users still exist. Git is supported by hosting platforms like GitHub, GitLab, and Bitbucket (each with its own set of pros and cons).
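
To make the "revert to a point when your code worked" idea concrete, here is a toy sketch using the GitPython library (my own choice for illustration; plain git on the command line does exactly the same job) that tags a known-good release and reverts a bad commit without rewriting history.

```python
# rollback_sketch.py -- hypothetical example using GitPython (pip install GitPython).
from git import Repo

repo = Repo(".")                              # open the current repository

# Tag the commit that is known to work in production.
repo.create_tag("v1.0-prod", message="last known-good production build")

# ...later, a new feature breaks production.  Revert it without losing history.
repo.git.revert("HEAD", no_edit=True)         # equivalent to: git revert --no-edit HEAD

# Inspect what changed since the good tag before deciding what else to roll back.
print(repo.git.diff("v1.0-prod", "HEAD", stat=True))
```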

If you are already familiar with Git, consider taking a more advanced Git tutorial on Atlassian. An advanced feature I recommend looking up is Git submodules, which let you pin specific commit hashes of multiple independent Git repositories so that you always build against a single, stable set of dependencies.

It is also important to have a README.md outlining the details of the repository, as well as packaging (e.g. using setup.py for Python) when necessary. If you are storing binary files, consider looking into Git LFS (though I recommend avoiding this if possible).
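
On the packaging point, a minimal setup.py might look like the sketch below; the package name, version pins, and dependencies are placeholders rather than recommendations.

```python
# setup.py -- minimal packaging sketch for a model repository (names are hypothetical).
from setuptools import setup, find_packages

setup(
    name="churn_model",                    # placeholder package name
    version="0.1.0",
    packages=find_packages(),              # picks up every package with an __init__.py
    install_requires=[
        "numpy>=1.21",
        "pandas>=1.3",
        "scikit-learn>=1.0",
    ],
    python_requires=">=3.8",
)
```

With this in place, the repository can be installed with a plain "pip install ." in any environment, whether that is a teammate's laptop, a CI server, or a Docker image.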

A data-science-specific problem with version control is the use of Jupyter/Zeppelin notebooks.

Data scientists absolutely LOVE notebooks. However, if you keep your code in notebooks under version control, every diff and merge gets buried in the JSON metadata and embedded output that notebooks carry around, instead of showing meaningful code changes.

You can either abandon the use of notebooks in version control entirely (and simply import the math functions from version-controlled libraries) or use existing tools like nbdime.

Automatic Testing

From a data scientist's perspective, testing usually falls into one of two camps. The first is the usual unit testing, which checks whether the code works properly and does what you want it to do. The other, more specific to the domain of data science, covers data quality checks and model performance: does your model actually produce an accurate score?

Now, I am sure many of you are wondering why that's an issue. You have already computed the classification scores and ROC curves, and the model looks good enough for deployment.

Well, lots of issues. The primary one is that the library versions in the development environment may be completely different from those in production. That means different implementations and approximations, and hence different model outputs.

Another classic example is the use of different languages for development and production.

Let’s imagine this scenario. You, the noble data scientist, wish to write a model in R, Python, Matlab, or one of the many new languages whose white paper just came out last week (and may not be well tested).

You take your model to the production team. The production team looks at you skeptically, laughs for 5 seconds, only to realize that you are being serious. Scoff they shall. The production code is written in Java.

This means rewriting the entire model in Java for production, which again means a completely different input format and different model outputs. Hence the need for automated testing.

Unit tests are very common: JUnit is available for Java users and the unittest library for Python developers. However, it is always possible for someone on the team to forget to run the unit tests before pushing code into production.
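
As a sketch of what such tests might look like with Python's unittest, the example below mixes an ordinary unit test with a model-quality check; the model artifact, data set, and 0.90 threshold are made up for illustration.

```python
# test_model.py -- illustrative tests: one ordinary unit test, one model-quality gate.
import unittest

import joblib
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score


class TestModel(unittest.TestCase):

    def setUp(self):
        # Hypothetical artifact and hold-out data; substitute your own.
        self.model = joblib.load("model.pkl")
        self.X, self.y = load_iris(return_X_y=True)

    def test_prediction_shape(self):
        """Ordinary unit test: one prediction comes back per input row."""
        preds = self.model.predict(self.X)
        self.assertEqual(len(preds), len(self.X))

    def test_accuracy_threshold(self):
        """Model-quality test: fail the build if accuracy drops below a set threshold."""
        preds = self.model.predict(self.X)
        self.assertGreaterEqual(accuracy_score(self.y, preds), 0.90)


if __name__ == "__main__":
    unittest.main()
```

Running python -m unittest on every push, rather than whenever someone remembers, is exactly the gap the tools below are meant to close.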

While you can use crontab to run automated tests, I would recommend using something more purpose-built like Travis CI, CircleCI, or Jenkins.

Jenkins allows you to schedule tests, cherry-pick specific branches from a version control repository, get emailed when something breaks, and even spin up Docker containers if you wish to sandbox your tests.

Containerization-based sandboxing is explained in more detail in the next section.

Containerization

Sandboxing is an essential part of coding. This might involve having different environments for various applications.

It could simply mean replicating the production environment into development. It could even mean having multiple production environments with different software versions in order to cater to a much larger customer base. If the best you have in mind is using a VM with VirtualBox, I am sure you have noticed that you either need to reuse the exact same VM for multiple rounds of tests (terrible DevOps hygiene) or re-create a clean VM for every test (which may take close to an hour, depending on your needs).

A simpler alternative is using a container instead of a full-blown VM. A container is essentially an isolated Unix process that looks, smells, and feels like a VM. The advantage is that it is lightweight and far less memory-intensive, meaning you can spin it up or take it down at will, within minutes. Popular containerization technologies include Docker (if you just need to run individual containers) and Kubernetes (if you fancy orchestrating multiple containers across a multi-server workflow).
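
To make the sandboxing idea concrete, here is a small sketch using the Docker SDK for Python (one possible client; the post does not mandate a specific tool) that spins up a throwaway container, runs a command in it, and discards the container afterwards.

```python
# sandbox_test.py -- hypothetical sketch: run a command inside a disposable container.
# Requires the Docker SDK for Python (pip install docker) and a running Docker daemon.
import docker

client = docker.from_env()

# remove=True deletes the container once the command exits, so every run starts
# from the same clean image rather than from a long-lived, slowly drifting VM.
logs = client.containers.run(
    image="python:3.10-slim",
    command=["python", "-c", "print('tests would run here')"],
    remove=True,
)
print(logs.decode())
```

Compared with rebuilding a VirtualBox VM for every round of tests, a container like this comes and goes in seconds.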

 

Containerization technologies help not only with testing but also with scalability. This is especially true when you need to think about many users hitting your model-based application, whether for training or for prediction.

Security

Security is important but often underestimated in the field of data science. Some of the data used for model training and prediction is sensitive, such as credit card information or healthcare records.

Several regulations, such as GDPR and HIPAA, need to be addressed when dealing with such data. And it is not only the client's data that needs protecting: trade-secret model structures and variables, when deployed on client servers, require a certain level of encryption.

This is often solved by deploying the model inside packaged executables (e.g. obfuscated JAR files) or by encrypting model variables before storing them in the client database (although, please DO NOT write your own encryption unless you absolutely know what you are doing).
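
As an illustration of the second option, a vetted library such as cryptography can encrypt serialized model parameters before they are written to the client's database. The parameter values below are made up, and key management is deliberately simplified; in practice the key would live in a secrets manager, not next to the data.

```python
# encrypt_params.py -- sketch: protect model parameters with a vetted library
# (pip install cryptography) instead of home-grown encryption.
import json

from cryptography.fernet import Fernet

key = Fernet.generate_key()            # in practice: fetch from a secrets manager
fernet = Fernet(key)

params = {"coefficients": [0.42, -1.3, 2.7], "intercept": 0.05}   # made-up values
token = fernet.encrypt(json.dumps(params).encode())               # store this blob

# Later, on a service that is authorized to score with the model:
restored = json.loads(fernet.decrypt(token).decode())
assert restored == params
```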

It is also wise to build models on a tenant-by-tenant basis in order to avoid accidental transfer learning that might leak information from one company to another. In the case of enterprise search, for example, data scientists could build models using all the data available and, based on permission settings, filter out the results a specific user is not authorized to see.

While that approach may seem sound, part of the information in the training data is learned by the algorithm and baked into the model, which still makes it possible for a user to infer the content of the pages they are forbidden to see.

There is no such thing as perfect security. However, it needs to be good enough (the definition of which depends on the product itself).

Collaboration

When working with DevOps or IT, as a data scientist, it is important to be upfront about requirements and expectations. This may include programming languages, package versions, or frameworks.

Last but not least, it is also important to show respect to one another. After all, both DevOps engineers and data scientists have incredibly hard problems to solve: DevOps do not know much about data science, and data scientists are not experts in DevOps and IT.

Hence, communication is key to a successful business outcome.

So if you are working in data science or DevOps, learning both skill sets will help you stand out from others. Don't leave this hanging: enroll yourself in courses like DevOps Foundation and Data Science Professional.


About Author

NovelVista Learning Solutions is a professionally managed training organization with specialization in certification courses. The core management team consists of highly qualified professionals with vast industry experience. NovelVista is an Accredited Training Organization (ATO) to conduct all levels of ITIL Courses. We also conduct training on DevOps, AWS Solution Architect associate, Prince2, MSP, CSM, Cloud Computing, Apache Hadoop, Six Sigma, ISO 20000/27000 & Agile Methodologies.
