Reduce complexity for dev testing setup & update CONTRIBUTING.md
Currently, looking at our CONTRIBUTING.md, I see a lot of manual setup steps required just to run the test suite. This is a very unwelcoming first impression, and we could probably hide most of these details from the contributor anyway.
This setup process could be simplified by automating it, for example with Docker. I'm open to other suggestions as well.
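As a sketch of what that automation could look like, here is a hypothetical `docker-compose` file that starts throwaway databases for the test suite. The service names, image versions, and credentials are illustrative assumptions, not Bookshelf's actual requirements:

```yaml
# Hypothetical example: start disposable databases for local test runs.
# Images, credentials, and database names are illustrative only.
version: "3"
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_USER: bookshelf
      POSTGRES_PASSWORD: bookshelf
      POSTGRES_DB: bookshelf_test
    ports:
      - "5432:5432"
  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: bookshelf
      MYSQL_DATABASE: bookshelf_test
    ports:
      - "3306:3306"
```

With something like this in place, a contributor would only need `docker compose up -d` followed by `npm test`, instead of installing and configuring each database by hand.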
Issue Analytics
- Created: 6 years ago
- Reactions: 1
- Comments: 10 (9 by maintainers)
Bookshelf is built on knex, which it uses for database connection handling and query execution. This lowers coupling and increases cohesion: bookshelf no longer needs to concern itself with how database connections and queries are carried out, as long as knex is well tested and can be trusted to function as documented.
Consequently, we can tighten the focus (cohesion) of bookshelf as a library so that it extends only down to the API boundary where knex takes over. Bookshelf doesn't need to test that knex reacts as expected to correct inputs. I think this is a strong argument for removing the test scaffolding that actually runs a variety of databases and measures the database-level effects of bookshelf code.
Bookshelf can expose the knex query builder without implementing or verifying any of that functionality itself.
That means knex can be tested independently to provide that verification; bookshelf does not need to guarantee that knex's functionality is correct. Knex provides those guarantees itself, and bookshelf is made no less reliable by passing those guarantees through to its users.
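The delegation pattern being described can be sketched in miniature. This is not Bookshelf's actual implementation; `QueryBuilder` is a hypothetical stand-in for knex, and the point is only that the model layer trusts the builder rather than re-implementing or re-verifying query construction:

```javascript
// Hedged sketch of the delegation argument above.
// 'QueryBuilder' stands in for knex; Bookshelf's real API differs.
class QueryBuilder {
  constructor(table) {
    this.table = table;
    this.wheres = [];
  }
  // Accumulate a where clause and return `this` for chaining,
  // in the fluent style knex uses.
  where(column, value) {
    this.wheres.push([column, value]);
    return this;
  }
  // Render the accumulated query as parameterized SQL text.
  toSQL() {
    const where = this.wheres.map(([c]) => `${c} = ?`).join(' AND ');
    return `select * from ${this.table}` + (where ? ` where ${where}` : '');
  }
}

// The "bookshelf" layer adds model semantics, but delegates all
// query construction to the builder it trusts. Its own tests only
// need to verify the delegation, not the SQL generation.
class Model {
  constructor(tableName) {
    this.tableName = tableName;
  }
  query() {
    return new QueryBuilder(this.tableName);
  }
}

const user = new Model('users');
console.log(user.query().where('id', 1).toSQL());
// select * from users where id = ?
```

Under this framing, the model layer's test suite can assert that the right builder calls are made, while correctness of the generated SQL against real databases remains the builder's (knex's) responsibility.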