changes which pose a risk of corrupting the database
The framework we’re using has an Object-Relational Mapper (ORM), which means things in the database look and feel like class objects in the code. Changes there can be unit tested before they ever touch production, or any real database for that matter. We’re also able to write mock tests that spin up a temporary in-memory database filled with test data, then run automated unit tests against it to verify everything works as expected.
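To make that concrete, here’s a rough sketch of what one of those tests looks like. The actual framework and models aren’t shown above, so this assumes SQLAlchemy and a made-up Event model purely for illustration; the point is that the test talks to a throwaway in-memory SQLite database, never a production server:

```python
# Sketch only: SQLAlchemy and the Event model are stand-ins for whatever
# the real project uses. The test creates a temporary in-memory database,
# loads test data, and verifies ORM behavior without touching any server.
import unittest
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Event(Base):  # hypothetical ORM model for illustration
    __tablename__ = "events"
    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)

class EventModelTest(unittest.TestCase):
    def setUp(self):
        # in-RAM database that exists only for the duration of the test
        self.engine = create_engine("sqlite:///:memory:")
        Base.metadata.create_all(self.engine)
        self.session = sessionmaker(bind=self.engine)()

    def test_event_round_trip(self):
        self.session.add(Event(title="Laser Cutter Basics"))
        self.session.commit()
        self.assertEqual(self.session.query(Event).count(), 1)

if __name__ == "__main__":
    unittest.main()
```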
We’re also moving toward a 12-Factor application design model with Behavior-Driven Development, so this is unlikely to ever be an issue, but it’s still something that has to be approved through the pull request and change-approval process.
Yes, I’m aware of how much that sounds like red tape and extra steps, but it’s not. It just means developers fork the code and submit pull requests to GitHub, with unit tests written first and then only the code needed to make those tests pass. When the pull request is submitted, the CI/CD system (Travis CI) runs all the tests. If anything fails, nothing goes upstream from there. If everything passes, we tag the master branch for release using semantic versioning (e.g. v1.0.0) and the new release gets pushed into the production cluster.
different levels of scrutiny?
It breaks down like this:
Committee chairs and other stakeholders attend the Development Priority Meeting to select items for the sprint and groom the backlog
developer picks available tickets selected for the sprint
developer creates unit tests
developer writes only enough code to make the unit tests pass
developer commits the code and submits a pull request to the development branch on GitHub
Travis CI tests the code on the pull request and emails the development team (it also shows up as an alert bell on GitHub)
code review begins; if the code is acceptable, the pull request is accepted
Docker Hub builds an image for the “latest” release
the community grid auto-deploys the “latest” release to the development systems
the developer team can then run end-to-end tests and acceptance tests
once all of the above passes, “latest” gets blessed with a semver tag
Docker Hub builds the version-tagged release
the developer team logs into the community grid and bumps the production-tagged version up to the new release
the community grid deploys the tagged version into production.
From a developer’s POV:
Write tests, run tests, write the least amount of code needed
Run tests again and submit a pull request
Sip coffee/beer and have fun making things for DMS
From DMS’s POV:
Only needed, predefined changes are picked and made available for work
All the things are tested and bugs are squashed before they hit the servers / membership
Able to track issues back to a specific release and roll back to a known working state with the push of a button
Sip coffee/beer knowing things get done and members are happy
How does this help us with testing the RSS feature?
Write the unit tests with mock data and make sure the RSS is rendered correctly (rough sketch below).
Deploy to the preprod cluster and point things at it.
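As a rough example (not the actual code), a unit test for the RSS piece would feed mock events into whatever renders the feed and check that the output is well-formed; render_rss() below is a hypothetical stand-in for the real renderer:

```python
# Sketch only: render_rss() is a placeholder for whatever view/helper
# actually produces the calendar's RSS output.
import unittest
import xml.etree.ElementTree as ET

def render_rss(events):
    """Hypothetical renderer: turns mock event dicts into an RSS 2.0 string."""
    items = "".join(
        f"<item><title>{e['title']}</title><link>{e['link']}</link></item>"
        for e in events
    )
    return f'<rss version="2.0"><channel><title>DMS Calendar</title>{items}</channel></rss>'

class RssFeedTest(unittest.TestCase):
    def test_feed_is_valid_xml_with_all_items(self):
        mock_events = [
            {"title": "Woodshop Basics", "link": "https://example.org/1"},
            {"title": "Board Meeting", "link": "https://example.org/2"},
        ]
        root = ET.fromstring(render_rss(mock_events))  # fails here if the XML is malformed
        self.assertEqual(root.tag, "rss")
        self.assertEqual(len(root.findall("./channel/item")), len(mock_events))

if __name__ == "__main__":
    unittest.main()
```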
How can we verify what is on production is what is in master?
The audit log, plus semantic versioning of Docker images and releases in GitHub.
I touched a little on this one in my reply to Bill, but it works like this (there’s also a rough drift-check sketch after the list):
code in master on GitHub gets tagged as 1.0.0
the Docker image in Docker Hub gets the same tag.
the Docker image tagged ‘latest’ is the same code that’s in master.
tag the code in GitHub as 1.0.2 and it goes down the line to Docker Hub tagged as 1.0.2
Travis CI follows this same process as well under its build history log.
we’re also able to tag logs in the community grid based on both container and image names (e.g. container: preprod/calendar, image: dallasmakerspace/calendar:1.0.0, msg: << HTTP ERROR LOG >>).
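To make the “prod matches master” check concrete, here’s a back-of-the-napkin sketch, not something that exists today: it assumes the running container exposes its image tag in an IMAGE_TAG environment variable and compares that against the latest tag in the repo:

```python
# Sketch only: IMAGE_TAG is an assumed convention (e.g. set from
# dallasmakerspace/calendar:1.0.0 at deploy time), not something wired up yet.
import os
import subprocess

running_version = os.environ.get("IMAGE_TAG", "unknown")
latest_git_tag = subprocess.run(
    ["git", "describe", "--tags", "--abbrev=0"],
    capture_output=True, text=True, check=True,
).stdout.strip()

if running_version != latest_git_tag:
    print(f"Drift: prod is running {running_version} but master is tagged {latest_git_tag}")
else:
    print(f"In sync at {running_version}")
```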
Store config in the environment
An app’s config is everything that is likely to vary between deploys (staging, production, developer environments, etc). This includes:
Resource handles to the database, Memcached, and other backing services
Credentials to external services such as Amazon S3 or Twitter
Per-deploy values such as the canonical hostname for the deploy
We’re already doing some of that in the source; there are a few areas that still need to be touched, and it’s applied through docker-compose.
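For example (the variable names here are just placeholders, not what we’ve standardized on), the app side of it boils down to reading everything deploy-specific from the environment that docker-compose injects:

```python
# Sketch only: CALENDAR_DB_URL, S3_BUCKET, and CANONICAL_HOST are example
# names; docker-compose supplies whatever we actually standardize on.
import os

DATABASE_URL = os.environ["CALENDAR_DB_URL"]                    # backing-service handle
S3_BUCKET = os.environ.get("S3_BUCKET", "")                     # external service (e.g. Amazon S3)
CANONICAL_HOST = os.environ.get("CANONICAL_HOST", "localhost")  # per-deploy value

# Staging, production, and a developer laptop run the same code;
# only the environment docker-compose provides is different.
```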
Static content should live either in AWS S3 or in the code repo’s webroot directory.
We will have to teach a class on it, you do realize that
Yes! We do, and that’s one of the classes I’ve been wanting to get out the door sooner rather than later. It’s a main staple class too, one that everyone I talk to while doing advocacy and recruiting wants to take.
I have a few VMs out in AWS that need to be joined into a cluster and green code deployed. I’d consider those UAT environments until we actually show stability in the system.
Ok, but this doesn’t show what is on prod currently … we need a diff between master and prod before we update too much
I found an error in master that would make the software not work right. Since it is working on prod, it can’t be the same.