LSTM methodology, while introduced in the late 1990s, has only recently become a viable and powerful forecasting technique. Classical forecasting methods like ARIMA (autoregressive integrated moving average) and HWES (Holt-Winters exponential smoothing) are still popular and powerful, but they lack the overall generalizability that memory-based models like LSTM offer.
The main objective of this article is to lead you through building a working LSTM model. This article's goal isn't necessarily to compare new and classic modeling techniques, but I will discuss some advantages and disadvantages of classical vs. RNN-based approaches in the conclusion.
The full code is provided below. Given you have the dataset and the…
As data scientists, our job is to deliver tangible, bottom-line results to the business. While I'd love to train neural networks all day, it's critical that we build solid relationships with our business units and find ways to deliver easily understandable and quantifiable value.
Unless you’re working for a big tech company, chances are that your team has plenty of use cases for analytics. In this short tutorial, I’m going to give you the code and overview of how you can leverage basic NLP (Natural Language Processing) to deliver real, communicable, valuable analytics.
For tutorials on gathering requisite text data…
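To make "basic NLP" concrete, here is a minimal sketch of the kind of lightweight analysis the article is pointing at: tokenize free-text feedback, drop common stopwords, and surface the most frequent terms. Everything here is illustrative — the `top_terms` helper, the tiny stopword list, and the sample feedback string are all invented for this sketch, not taken from the article's dataset.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real projects use a fuller one (e.g. from NLTK).
STOPWORDS = {"the", "a", "an", "and", "of", "to", "is", "in", "was"}

def top_terms(text, n=3):
    """Lowercase, tokenize on letters, drop stopwords, and count term frequency."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return counts.most_common(n)

feedback = (
    "The shipping was slow. Slow shipping and slow support "
    "mean unhappy customers."
)
print(top_terms(feedback))  # most frequent non-stopword terms
```

Even this small a pipeline produces something communicable: "the word 'slow' dominates our support feedback" is a finding a business unit can act on.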
Using company APIs is a great way to automate data extraction. While it is possible to scrape websites for data (see my articles on web scraping), APIs offer a simpler, and often more legal, way to get data at scale.
The Complete Basics — What is an API?
API stands for "Application Programming Interface," which in (very) simple terms is a piece of software that allows two applications to communicate. App 1 sends requests to App 2, and App 2 returns results. Different APIs will have different restrictions on access to data.
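The request/response exchange above can be sketched in a few lines of Python. The endpoint URL and parameters here are purely hypothetical — a real API's documentation tells you the actual paths, parameters, and any authentication it requires.

```python
from urllib.parse import urlencode

def build_request_url(base_url, params):
    """Compose a GET request URL: the API endpoint plus query parameters."""
    return f"{base_url}?{urlencode(params)}"

# App 1 (our script) composes a request for App 2 (the API server).
url = build_request_url(
    "https://api.example.com/v1/search",      # hypothetical endpoint
    {"q": "data science", "limit": 10},
)
print(url)

# Actually sending the request would look like:
#   import urllib.request
#   with urllib.request.urlopen(url) as resp:
#       body = resp.read()   # App 2's results come back here
```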
If you are a complete beginner, I strongly urge you to check out part 1 of this tutorial first. The beginning of this tutorial will be geared toward beginners — I will cover Python class and method fundamentals, but only the bare basics. You can find the TL;DR right below it.
Why not reuse the code from part 1?
As you may have noticed, there is a lot of manual input in the previous process. We would need to copy-paste a lot of code into a new Python file or retype it all. …
The whole brand question has been on my mind for a while now. Whenever people ask me for data science career advice, I almost reflexively tell them to build an online presence. I'll be honest: the jury is still out on how large an impact an online presence has, but I feel it's definitely worth trying.
The way I think about building a brand is the way I think about cover letters. Will it propel you past the competition? Probably not. But it will give you a slight advantage over some people. …
I noticed there aren't too many resources on web crawling that are geared toward total beginners, so I decided to make one. Part 1 will be for complete beginners; then we'll take a more object-oriented approach in Part 2 (which will be a separate article).
The Complete Basics — What is Web Crawling?
Web crawling is writing a program that navigates a web browser for you. An automated browser (often run headless, i.e., without a visible window) can do the things a human could do, just faster and more consistently. A couple of applications:
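Driving an actual headless browser usually involves a tool like Selenium or Playwright, which I won't reproduce here. But the step a crawler repeats on every page — parse the HTML, collect the links, queue them for the next visit — can be sketched with the standard library alone. The `LinkCollector` class and the sample page below are invented for illustration.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from <a> tags -- the core step a crawler
    repeats on every page it visits."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<html><body><a href="/jobs">Jobs</a> <a href="/about">About</a></body></html>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # a crawler would fetch each of these next, then repeat
```

A full crawler wraps this in a loop: pop a URL from the queue, fetch it (with a browser or plain HTTP), collect its links, and push the unseen ones back onto the queue.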
Data scientist who's a bit obsessed with the economics of healthcare