Decide on track curriculum
We have to decide on the track curriculum and update config.json accordingly (for example, reorder exercises by difficulty, since they are served in order, as reported in https://github.com/exercism/python/pull/891#issuecomment-343319068).
Moreover, it would be awesome to have hints.md for each exercise to explain certain concepts helpful for that exercise.
TODO:
- decide on a list of core exercises
- decide which exercises unlock which others (set unlocked_by accordingly)
- reorder exercises in config.json (a rough sketch of such a reordering follows below)
- add hints if necessary for core exercises
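For the reordering step, a minimal sketch of what that could look like is below; it assumes config.json keeps its exercises in an "exercises" list whose entries carry a "difficulty" field, so adjust the key names if the actual schema differs:

```python
import json

# Sketch: sort the track's exercise entries by difficulty so they are
# served easiest-first. Assumes an "exercises" list with a "difficulty"
# field on each entry; entries without one sort to the front.
with open("config.json") as f:
    config = json.load(f)

config["exercises"].sort(key=lambda exercise: exercise.get("difficulty", 0))

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
    f.write("\n")
```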
Hopefully I’ll have some time to work on this over the next weekends.
Issue Analytics
- Created: 6 years ago
- Comments: 11 (11 by maintainers)
Top GitHub Comments
I have been asked for input on this topic as I was the one who complained about the parallel-letter-frequency exercise.
I am not a contributor (to this track), only a student. There are three non-trivial views I would like to express. Apologies in advance if they do not seem relevant to you.
Who Cares if a Level 5 exercise occurs between two Level 1 exercises ?
Well, who cares if this intimidates beginners enough they drop out ?
Well, who cares if this encourages otherwise diligent students to find ways to skip an exercise ? (Is there a ‘proper’ way to skip an exercise ?)
Well, who cares if this encourages till now honest students to submit a blank (not an incomplete but a not even attempted) solution so they can plagiarise someone else’s ?
But most of all, who cares if students do this and base their solution on a broken solution, under the delusion that it must be good because it passes all the tests?
Hopefully, everyone on the project cares.
How to Detect if an Exercise is Out of Sequence ?
I assume that exercises are supposed to get more difficult gradually. An easy exercise at the end of config.json is a waste of time.
Even if contributors followed the guidelines diligently, adjusted the ‘canonical’ difficulty level to something appropriate for their track, and configlet gave no warnings to suggest anything amiss with the order of entries in config.json, you would still have a mechanism that is only as good as the assigned level of difficulty. Changing the level of difficulty does not make an exercise easier or more difficult.
It’s the GIGO principle.
The Python track has lots of students. Some statistical analysis of submissions might be meaningful.
A simple analysis, ordered by config.json, of how many students attempt each exercise each week should show a gradual decline due to natural wastage. A large drop between two consecutive exercises might be worth a closer look.
Likewise an analysis of the average number of iterations per student might suggest exercises that many students don’t take in their stride. Likewise how many first (and only) submissions do not pass the tests.
If such analyses don’t suggest some exercises are giving students more trouble than others, then there is no good reason to change anything.
It’s the don’t fix it until you know it is broken (and can test the fix) principle.
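As an illustration of the kind of analysis described above, here is a minimal sketch. It assumes weekly attempt counts per exercise are already available as a plain dict keyed by slug (extracting them from real submission data is not shown) and flags unusually large drops between consecutive exercises in config.json order:

```python
import json

def flag_suspicious_drops(config_path, weekly_attempts, threshold=0.5):
    """Flag consecutive exercise pairs where attempts fall off sharply.

    `weekly_attempts` is a hypothetical mapping from exercise slug to the
    number of students who attempted that exercise in a given week.
    """
    with open(config_path) as f:
        slugs = [exercise["slug"] for exercise in json.load(f)["exercises"]]

    flagged = []
    for previous, current in zip(slugs, slugs[1:]):
        before = weekly_attempts.get(previous, 0)
        after = weekly_attempts.get(current, 0)
        # A drop much larger than natural wastage may mean the earlier
        # exercise is intimidating or out of sequence.
        if before and (before - after) / before > threshold:
            flagged.append((previous, current, before, after))
    return flagged
```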
Testing the Untestable ?
The parallel-letter-frequency test does not test for parallel execution. I submitted a ‘test passing’ solution that makes no attempt to execute anything in parallel. More than one of the solutions I examined did likewise.
I am not criticising the contributor. I am asking why this test case is in the Python track at all ? What purpose does it serve ?
I am not familiar with Python’s concurrent processing modules but I do know something of the issues involved. Of the other solutions I examined, I judged more than half were broken.
Not good examples, except perhaps of how not to do things, but how is the level 1 student to know ?
The pyunit test framework does not, as far as I know, have any support for testing concurrent execution. I thought there might be one that does but I could not find it.
The tests do not run for long enough to find those broken solutions that update a single counter without any protection against ‘lost updates’.
The tests do not measure execution times with and without parallel execution so don’t find those broken solutions whose protection against ‘lost updates’ effectively serialises computation of multiple execution threads.
The tests do not even check that the solution imports an appropriate Python module.
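To make the lost-update point concrete, here is a rough sketch of the sort of stress test meant above. The module and function names (parallel_letter_frequency, calculate) are assumptions for illustration only; a large input makes unsynchronised counter updates across threads likely to show up as wrong totals, though it still cannot prove that any work actually ran in parallel:

```python
import unittest

# Hypothetical import: the solution is assumed to expose a
# calculate(texts) function returning a dict of letter counts.
from parallel_letter_frequency import calculate


class LostUpdateStressTest(unittest.TestCase):
    def test_large_input_counts_are_exact(self):
        # Many repetitions make races on unprotected shared counters far
        # more likely to surface as incorrect totals. This still does not
        # verify that anything executed in parallel.
        texts = ["abcdefghij" * 10_000] * 50
        expected = {letter: 10_000 * 50 for letter in "abcdefghij"}
        self.assertEqual(calculate(texts), expected)


if __name__ == "__main__":
    unittest.main()
```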
Visualization tool by @jonmcalder (https://github.com/exercism/discussions/issues/175#issuecomment-318243990)