nonessential / shelter in place designs / April 2020
Cloning a private GitHub repo on EC2
Here are the steps to get your code from a private GitHub repo onto an EC2 instance:
1. SSH into the EC2 instance, then create an SSH key pair using the following commands:
$ ls -al ~/.ssh
$ ssh-keygen -t rsa -C "firstname.lastname@example.org"
$ eval "$(ssh-agent -s)"
$ ssh-add ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub
2. Add the public key to your GitHub account.
Follow these steps: Adding a new SSH key to your GitHub account - GitHub Help
3. On the EC2 machine, verify SSH access works:
$ ssh -T git@github.com
4. Clone your repo:
$ git clone git@github.com:blakeanderson/blake-anderson.git
Installing Swift 5 on Ubuntu
Install the clang compiler
$ sudo apt-get install clang
Install required libs
$ sudo apt-get install libcurl3 libpython2.7 libpython2.7-dev
Download your version of Swift (Swift.org - Download Swift); this downloads a tarball to your current directory
$ wget https://swift.org/builds/swift-5.1.3-release/ubuntu1604/swift-5.1.3-RELEASE/swift-5.1.3-RELEASE-ubuntu16.04.tar.gz
Import the PGP keys into your keyring
$ wget -q -O - https://swift.org/keys/all-keys.asc | gpg --import -
Verify the PGP signature (if this fails, see Swift.org - Download Swift)
$ gpg --keyserver hkp://pool.sks-keyservers.net --refresh-keys Swift
$ gpg --verify swift-5.1.3-RELEASE-ubuntu16.04.tar.gz.sig
Extract the tarball
$ tar xzf swift-5.1.3-RELEASE-ubuntu16.04.tar.gz
Move the extracted files
$ sudo mv swift-5.1.3-RELEASE-ubuntu16.04 /usr/share/swift
Configure the system's PATH environment variable (the backslash keeps $PATH unexpanded until .bashrc is sourced)
$ echo "export PATH=/usr/share/swift/usr/bin:\$PATH" >> ~/.bashrc
$ source ~/.bashrc
Check the current Swift version
$ swift --version
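Once the PATH is set, a quick hello-world is a handy sanity check (the file name here is arbitrary):

```swift
// hello.swift — run directly with `swift hello.swift`,
// or compile with `swiftc hello.swift -o hello`
print("Hello from Swift on Ubuntu!")
```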
Back to Blogging (Vapor Edition)
I recently decided to move my blog off of Tumblr and write my own CMS. I wanted to try out some new technologies, so I decided to build the API and front end web with Vapor.
Vapor is written in Swift, a programming language I enjoy using. One of the most promising aspects of Swift is that it’s able to scale from low-level systems programming all the way up to high-level scripting, and thus be applicable to a wide variety of problem domains. Using Swift for API and web development seemed like a great idea.
Beyond my familiarity with Swift, there were plenty of other reasons to try it out:
- Vapor is developed by the community and, as a result, has great support. The Discord channel is always active and helpful.
- It works on Ubuntu, making deployment to EC2 a possibility.
- It uses a non-blocking, event-driven architecture built on top of Apple’s SwiftNIO, so it’s incredibly performant.
The Vapor app is broken into three parts:
- API - using token authentication and Postgres for persistence. There’s also a small Tumblr importer to seed the database with my Tumblr posts.
- Public Web - the external site; uses Leaf templates and hand-written CSS/HTML for the design
- Admin Web - the internal site; uses Leaf templates and Materialize.js for the design
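As a rough sketch of how these pieces fit together in Vapor (the route paths and the `Post` model are hypothetical, and details differ between Vapor versions):

```swift
import Vapor
import Fluent

// Hypothetical sketch: a JSON API route and a Leaf-rendered web route.
// Assumes a `Post` Fluent model conforming to Content, and a configured
// Leaf renderer — names here are illustrative, not from the actual project.
func routes(_ app: Application) throws {
    // API: return all posts as JSON
    app.get("api", "posts") { req -> EventLoopFuture<[Post]> in
        Post.query(on: req.db).all()
    }

    // Public web: render the index page with Leaf
    app.get { req in
        req.view.render("index")
    }
}
```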
I decided not to couple the API and public web, even though it would have been faster and easier to develop. Keeping the web separate allowed for easy testing of the API and for a future integration with an iOS app.
Both web modules, public and admin, use Leaf templates. The templates use Swift-inspired syntax to generate dynamic HTML pages and fit naturally with Vapor. They worked well for this project, but I would use a different web framework if I were to build something larger. Also, Xcode does not handle HTML/Leaf well, so I had to switch between VS Code and Xcode to develop the front ends.
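To give a flavor of that Swift-inspired syntax (a rough sketch: the template name and context fields are hypothetical, and exact tag syntax varies between Leaf versions), a Leaf template interleaves HTML with `#`-prefixed tags:

```
<!-- index.leaf — "title" and "posts" are hypothetical context values -->
<h1>#(title)</h1>
#for(post in posts):
    <article>
        <h2>#(post.title)</h2>
        #(post.body)
    </article>
#endfor
```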
For hosting, I chose to go with AWS services. I’m running the app on a micro EC2 instance, using S3 for image hosting, and RDS for database management.
Vapor has been a joy to work with. There really is a compelling case to be made for building a full stack product with Swift. I’m excited to see where it goes from here. I’ll be open sourcing this blog project as soon as I get it polished up.
Udacity Data Analysis Nanodegree Review
I’ve been really interested in improving my data skills, so when I came across the Data Analyst Nanodegree program from Udacity in December, I thought I’d give it a shot. I really had fun with it. Here’s my recap of the six-month (Term 1 & Term 2) Data Analyst Nanodegree course:
Term 1 is all about learning Python and its powerful data libraries, pandas and Matplotlib. There’s also some SQL sprinkled in and a nice dose of statistics, but the focus is on the data analysis process with Python. The term was broken into three parts: an introduction to Python, data analysis, and statistics.
The first section, an introduction to Python, covered the basics of the language using Jupyter notebooks. All of the essentials were there: strings, functions, modules, arithmetic. If you’re familiar with computer programming you’ll breeze through this section. The project was a simple data analysis of bike sharing using Python. Most of the implementation details were scaffolded out, leaving the student to fill in the blanks. Overall, it was a nice introduction to start.
The second section focused on the basics of SQL and investigating datasets using Python. I had a pretty good grasp of SQL before this course, so I didn’t spend very much time on this portion. I should mention, though, that they did a great job covering all of the necessary commands and clearly explained some of the trickier bits, like joins and window functions. This section also went in depth on how to analyze data using Python. Understanding how to clean data with pandas and plot it with Matplotlib is the cornerstone of this nanodegree program; nearly everything in Term 2 builds on the foundational knowledge learned here, so don’t skip it if you’re going to be moving on. The project in this section was one of my favorites of the course: an investigation into datasets of your choosing. I chose to look at the effects of government health care spending on life expectancy and colon cancer.
The last section covered statistics. In it, we covered all the good stuff: regressions, confidence intervals, hypothesis testing, bootstrapping, and Bayes.
I found the content to be well taught and thorough, but my biggest complaint about this section was its format and pacing. The format of some of the videos and quizzes was noticeably different from everything else. Videos were very, very short, which made it difficult to let a topic flow, and many of the quizzes were unnecessary in the early setup of the problems. Even with its flaws, this was unquestionably one of the more useful sections of the nanodegree.
Term 2 consisted of Exploratory Data Analysis using R, Data Wrangling, and Data Storytelling using Tableau. Upon completion of these three sections, you’re awarded the coveted nanodegree.
Exploratory Data Analysis using R was the first section of Term 2. It started off by going through the fundamentals of R, which aren’t that different from a language like Python, and introducing RStudio, the development environment. This was my first exposure to R, and I thought it was fairly easy to pick up; there are a lot of similarities in syntax between R and Python + pandas when analyzing and cleaning data. The main advantage of R, for me, was the graphing tools. I was particularly impressed by the simplicity and power of ggplot. I found this section interesting but not incredibly useful, as it was essentially a repeat of section 2 of Term 1 in a different programming language (one that is declining in usage).
The second section of Term 2 focused on Data Wrangling/Cleaning. Nearly all of this was covered in section 2 of Term 1. The project required gathering, assessing, and cleaning a user’s Twitter dataset. Really, nothing new here.
The last section of Term 2 was about Data Storytelling using Tableau. I was really excited about this one but ended up pretty disappointed. I’d heard about Tableau many times when reading or discussing data analysis but had never had the chance to check it out. Its drag-and-drop interface was extremely easy to use, and I really think this application could open up data analysis to more people. But that was also my biggest problem with the section: it taught an application’s user interface instead of continuing, or reinforcing, data analysis knowledge that goes beyond an easy-to-use interface. I personally would have loved to see this section replaced by a more difficult SQL section or more advanced plotting with Python.
A few other notes:
- Udacity does provide nanodegree students with a “mentor.” I didn’t have any real need for one, but I appreciated knowing that if I did get stuck there was someone I could talk to directly.
- Term 1 would be really difficult for anyone who hasn’t done any computer programming. I’d highly recommend a foundations course before starting the nanodegree if you haven’t programmed before.
- The timelines and due dates were generous. I rarely felt rushed and was able to accomplish nearly all of the course just on weekends.
- Jupyter notebooks are incredible. This was my first time using them and dang do they make learning programming SO nice.
My final advice:
It’s worth it. Term 1 has the better material; Term 2 has the certificate. If you’re exclusively looking to improve your data analysis skills, taking only Term 1 is sufficient. If you’re looking to improve your resume and go into the data analysis profession, then it’s hard to turn down Term 2 and the nanodegree certificate.
ClassDojo Release Notes / SF / March 2016
When I was sent these, I had no words, just lots of 🔥 emojis.
(Read these to the tune of Fresh Prince)