Something never seen on the Events Calendar - Live Now

My problem is I have maybe two hours a week of DMS development time, and DMS has all of three active, willing developers that want to work on DMS projects. Let’s just keep moving, keep going. Forget archaic processes and docker test environments, do it live!

Let’s do this:

  1. Create a pull request
  2. I will load it locally and test it for glaring issues / errors
  3. I will review the code in its entirety
  4. Push to production
  5. Community reviews for any additional errors

That way we keep moving, software keeps happening, and we don’t have to wait for new processes and tasks to happen.

3 Likes

And we end up with bad bugs later that don’t get found until the accounting is way off or something just as bad…
There is a reason for testing things and going through some sort of procedure for getting a patch into production. If someone doesn’t look through the code and do some testing, we will have a lot of broken code in our production websites; if not now, then later. I understand the drive to keep things moving, and I would very much like to keep things moving, but I think safety and sanity are an issue here. There needs to be a place to try out potential ideas, display them for others, and refine them in an environment similar to production. That way people can give feedback before they are forced to use it.

1 Like

This feature covers a read-only RSS feed…

1 Like

This feature branch also includes other patches.

1 Like

As I have said repeatedly… I know what is in master is not what is on production, because I fixed some bugs that would keep it from running properly. You have voiced concerns that there might be other places where credentials are in the code. We don’t know if this will work with production data because we don’t know what is on production.

1 Like

But if you want to push the changes … do what you think is best.
I’m standing back from it for now until we have some way of testing.

I proposed a shared hosting solution a while back… easy to implement

However, the RSS feed is a special case for which this wouldn’t work, because many of its consumers are services on the internet.

2 Likes

Alright, so you work with @denzuko directly to get a great testing infrastructure online. The second y’all have it ready, I will switch right over to whatever quality checks y’all want.

I’m not going to touch your branch; you’ll have to submit a pull request if you would like it merged. If you would like to wait until you guys have what you want set up, that’s fine.

2 Likes

testing infrastructure online

So, to be clear, we already have testing infrastructure along with deployment pipelines. The testing suite is PHPUnit (comes baked in with the frameworks we’re using), the continuous-testing environment is Travis CI, and automated security scanning is done with snyk.io.

Continuous deployment is baked into the community grid platform, which deploys images from the Dallas Makerspace Docker Hub.
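For a concrete picture, a minimal Travis CI config for a PHP project using PHPUnit and CakePHP coding standards might look something like this. This is an illustrative sketch only, not the project’s actual .travis.yml; the PHP version and script paths are assumptions:

```yaml
# Illustrative .travis.yml sketch -- NOT the project's real config.
language: php
php:
  - "7.2"            # assumed version, for illustration only
install:
  - composer install --no-interaction
script:
  - vendor/bin/phpunit                          # unit tests
  - vendor/bin/phpcs --standard=CakePHP src/    # coding-standard checks
```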

As for selenium testing that’s something we may look into later.

We’re literally at the point of going over the build log and fixing the bugs reported. From what I’m seeing, about 90% of it is just stupid formatting / convention bugs that slipped in from keyboard-cowboying features. Another 5% is just code-quality stuff like documentation. The rest are code coverage and things like:

FILE: ...is/build/Dallas-Makerspace/calendar/src/Model/Table/RegistrationsTable.php
...
 160 | ERROR | [x] Expected "int" but found "integer" for parameter type
 162 | ERROR | [ ] Expected "bool" but found "boolean" for function return type
1 Like

How does this help us with testing the RSS feature?

1 Like

How can we verify what is on production is what is in master?

1 Like

Might it be helpful to have different levels of scrutiny?

The strictest would be for changes which pose a risk of corrupting the database.

Medium is for “read-only” features, but those which are essential for the system to work, like displaying time-critical information.

Least for report generation systems in which unavailability can be tolerated.

1 Like

If a bug gets into the report that the accountant uses, for example, it could be a higher priority.

1 Like

That can be classified as corrupting the database.

I was thinking more in terms of reports for statistical analysis.

1 Like

changes which pose a risk of corrupting the database

The framework we’re using has an Object-Relational Mapper (ORM), which means things in the database look and feel like class objects in the code. So changes there can be unit tested before ever hitting production, or any database at all. We’re also able to introduce mock tests, which supply a temporary in-RAM database filled with test data that we run automated unit tests against to verify everything works as expected.
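To make that concrete, here is a minimal sketch of the in-memory-database idea. The real project would use CakePHP’s ORM with PHPUnit fixtures; this example uses Python and SQLite purely to illustrate the pattern, and the table and function names are invented:

```python
import sqlite3

# Illustrative sketch only: tests run against a disposable in-memory
# database seeded with known fixture data, so nothing ever touches
# production. Names below are made up for the example.

def make_test_db():
    """Fresh in-memory database with known fixture rows."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE registrations (name TEXT, cancelled INTEGER)")
    conn.executemany(
        "INSERT INTO registrations VALUES (?, ?)",
        [("alice", 0), ("bob", 1), ("carol", 0)],
    )
    return conn

def fetch_active_registrations(conn):
    """Toy 'model' method under test: what an ORM finder would do."""
    rows = conn.execute(
        "SELECT name FROM registrations WHERE cancelled = 0 ORDER BY name"
    ).fetchall()
    return [r[0] for r in rows]

# A unit test is then just: build the fixture, call the code, check the result.
conn = make_test_db()
assert fetch_active_registrations(conn) == ["alice", "carol"]
```

The point is that the whole database lives and dies inside the test run, which is exactly what makes it safe to exercise database-touching code before a pull request ever reaches production.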

We’re also moving to more of a 12-Factor application design model with Behavior-Driven Development, so this is unlikely to be an issue at all, but it is still an item that needs to be approved via the pull-request and change-approval process.

Yes, I’m aware of how much that sounds like red tape and extra steps, but it’s not. It just means developers fork the code, then submit pull requests to GitHub, with unit tests written first and then code to make the unit tests pass. When the pull request is submitted, the CI/CD (Travis CI) runs all the tests. If it fails, nothing goes upstream from there. If it passes, we tag the master branch for release using semantic versioning (e.g. v1.0.0) and the new release gets pushed into the production cluster.

different levels of scrutiny?

This breaks down like this:

  • Committee chairs and other stakeholders attend the Development Priority Meeting to select items for the sprint and groom the backlog
  • Developer picks available tickets selected for the sprint
  • Developer creates unit tests
  • Developer writes only enough code to make the unit tests pass
  • Developer commits code and submits a pull request to the development branch on GitHub
  • Travis CI tests the code on the pull request and emails the development team (also shows up as an alert bell on GitHub)
  • Code review begins; if acceptable, the pull request is accepted
  • Docker Hub builds an image for the “latest” release
  • Community grid auto-deploys the “latest” release to development systems
  • The development team can then run end-to-end tests and acceptance tests
  • With all the above passing, “latest” gets blessed with a semver tag
  • Docker Hub builds the version-tagged release
  • The development team logs into the community grid and bumps the production tag up to the new release
  • Community grid deploys the tagged version into production.
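The “bumps up the production tagged version” step above relies on semantic versioning (MAJOR.MINOR.PATCH). As a minimal sketch of that convention (the tag format like v1.0.0 comes from this thread; the bump function itself is invented for illustration, since the real flow tags releases in Git and Docker Hub):

```python
# Illustrative sketch of a semver bump: a release increments exactly
# one part and zeroes the parts to its right.

def bump(version: str, part: str = "patch") -> str:
    major, minor, patch = (int(x) for x in version.lstrip("v").split("."))
    if part == "major":
        major, minor, patch = major + 1, 0, 0
    elif part == "minor":
        minor, patch = minor + 1, 0
    else:
        patch += 1
    return f"v{major}.{minor}.{patch}"

print(bump("v1.0.0"))           # -> v1.0.1
print(bump("v1.0.1", "minor"))  # -> v1.1.0
```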

From a developer’s POV:

  • Write tests, run tests, write the least amount of code needed
  • Run tests again and submit a pull request
  • Sip coffee/beer and have fun making things for DMS

From DMS’s POV:

  • Only needed, predefined changes are picked and made available for work
  • All the things are tested and bugs squashed before they hit the servers / membership
  • Able to track issues to a specific release and roll back to a known working state with the push of a button
  • Sip coffee/beer knowing things get done and members are happy

From the member’s POV:

  • New features and things just work
2 Likes

How does this help us with testing the RSS feature?

https://qafoo.com/blog/007_practical_phpunit_testing_xml_generation.html

Write the unit tests with mock data and make sure the RSS is rendered correctly.
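As a sketch of what such a test looks like (the real tests would be PHPUnit, per the linked article; this Python version, with an invented render_rss helper, just illustrates the pattern of rendering from mock data and asserting on the parsed XML):

```python
import xml.etree.ElementTree as ET

# Illustrative only: a toy RSS 2.0 renderer fed with mock event data.
# The helper and the sample data are invented for this sketch.

def render_rss(channel_title, items):
    """Render a minimal RSS 2.0 feed from (title, link) pairs."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    for title, link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

# The test: render from mock data, parse it back, assert on structure
# rather than on raw strings.
feed = render_rss("DMS Events", [("Laser 101", "https://example.com/laser-101")])
doc = ET.fromstring(feed)
assert doc.tag == "rss" and doc.get("version") == "2.0"
assert [t.text for t in doc.iter("title")] == ["DMS Events", "Laser 101"]
```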

Deploy to the preprod cluster and point things at it.

How can we verify what is on production is what is in master?

The audit log, plus semantic versioning of Docker images and releases in GitHub.

I touched a little on this in my reply to Bill, but it works like this:

  • github code in master gets tagged as 1.0.0

  • docker image in docker hub gets the same tag.

  • docker image tagged ‘latest’ is the same code in master.

  • tag code in github as 1.0.2 and it goes down the line to docker hub as tagged 1.0.2

  • travis-ci follows this same process as well under the build history log.

  • we’re also able to tag logs in the community grid based on both container and image names (e.g. container: preprod/calendar, image: dallasmakerspace/calendar:1.0.0, msg: << HTTP ERROR LOG >>)

1 Like

The preprod cluster will be on the internet and not just the intranet, correct?

Alright. Where do I find that? Does it take into account application configuration and static files?

How will it bring in the specific configuration for the instance?

1 Like

We will have to teach a class on it, you do realize that :stuck_out_tongue:

1 Like

Application configuration and static files:

https://12factor.net/config

Store config in the environment
An app’s config is everything that is likely to vary between deploys (staging, production, developer environments, etc). This includes:

  • Resource handles to the database, Memcached, and other backing services
  • Credentials to external services such as Amazon S3 or Twitter
  • Per-deploy values such as the canonical hostname for the deploy

We’re already doing some of that in the source; there are a few areas that need to be touched, and it’s applied with docker-compose.
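A minimal sketch of what “config in the environment” means in practice (the variable names and helper here are invented for illustration; in the real stack, docker-compose would supply the environment):

```python
import os

# Illustrative 12-factor config read: values come from the environment
# (set by docker-compose or the host), with an explicit failure when a
# required value is missing. Names are invented for the example.

def load_config(env=os.environ):
    db_url = env.get("DATABASE_URL")
    if not db_url:
        raise RuntimeError("DATABASE_URL must be set in the environment")
    return {
        "database_url": db_url,
        # Per-deploy value with a sane default for local development.
        "canonical_host": env.get("CANONICAL_HOST", "localhost"),
    }

cfg = load_config({"DATABASE_URL": "mysql://calendar:secret@db/calendar"})
assert cfg["canonical_host"] == "localhost"
```

Because nothing is hard-coded, the same image can run in preprod and production with different environments, which is what keeps credentials out of the repo.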

For static content, that should be either in AWS S3 or in the code repo’s webroot directory.

We will have to teach a class on it, you do realize that :stuck_out_tongue:

Yes! We do, and that’s one of the classes I’ve been wanting to get out the door sooner rather than later. It’s a staple class too, one that everyone I talk to when doing advocacy and recruiting wants to take.

1 Like

On Prod? I didn’t know we were running docker on prod already

2 Likes

Audit log

Here are the links as of right now:

There are no syslog or deployment logs at this point, since what’s on the old VM is manually deployed, which is not how we want to do deployments.

1 Like