How to evaluate the dense traffic in nocrash?


Make sure you have read FAQ before posting. Thanks!

Hello again! Thank you so much again for helping me fix the error. I am glad to tell you that I have successfully trained the no_crash model. I have now collected about 150K frames (almost 186 GB) and tested on Town01 with train weather. Here is the report: https://wandb.ai/sunhaoyi/carla_train_phase2/reports/Project-Dashboard–Vmlldzo3NjY3MzE?accessToken=gwm97gty3n5dvf24l82dk3qxz24ller7ndn5128kzjif4qyerppqlk9wnnwnp220 . The results are as follows:

Town01,0,1,78,225,71.45,0,409.5
Town01,0,3,78,225,100.0,2,452.25
Town01,0,6,78,225,29.41,0,58.1
Town01,0,8,78,225,27.94,0,63.55
Town01,0,1,103,21,100.0,0,188.0
Town01,0,3,103,21,100.0,0,187.85
Town01,0,6,103,21,0.0,0,180.05
Town01,0,8,103,21,60.82,0,328.35
Town01,0,1,127,87,100.0,0,232.55
Town01,0,3,127,87,100.0,0,233.65
Town01,0,6,127,87,72.93,0,367.55
Town01,0,8,127,87,100.0,0,232.8
Town01,0,1,19,103,28.63,0,243.6
Town01,0,3,19,103,100.0,0,266.1
Town01,0,6,19,103,23.7,0,212.15
Town01,0,8,19,103,100.0,0,356.4
Town01,0,1,230,210,100.0,0,36.95
Town01,0,3,230,210,61.43,0,136.05
Town01,0,6,230,210,100.0,0,36.95
Town01,0,8,230,210,100.0,0,35.65
Town01,0,1,250,190,27.93,0,209.1
Town01,0,3,250,190,100.0,0,140.35
Town01,0,6,250,190,21.53,0,203.2
Town01,0,8,250,190,21.31,0,201.45
Town01,0,1,220,118,57.3,0,273.65
Town01,0,3,220,118,57.3,0,273.65
Town01,0,6,220,118,29.49,0,210.0
Town01,0,8,220,118,55.27,0,238.65
Town01,0,1,200,224,100.0,0,255.25
Town01,0,3,200,224,100.0,0,256.5
Town01,0,6,200,224,0.15,0,183.0
Town01,0,8,200,224,41.71,0,331.05
Town01,0,1,11,17,100.0,0,134.4
Town01,0,3,11,17,100.0,0,134.85
Town01,0,6,11,17,100.0,0,134.0
Town01,0,8,11,17,100.0,0,135.95
Town01,0,1,78,245,100.0,0,153.5
Town01,0,3,78,245,100.0,0,153.75
Town01,0,6,78,245,52.75,0,60.4
Town01,0,8,78,245,48.52,0,65.0
Town01,0,1,3,175,31.46,0,274.3
Town01,0,3,3,175,100.0,0,169.1
Town01,0,6,3,175,23.38,0,231.95
Town01,0,8,3,175,7.7,0,26.0
Town01,0,1,92,112,100.0,0,221.0
Town01,0,3,92,112,100.0,0,220.55
Town01,0,6,92,112,100.0,0,221.75
Town01,0,8,92,112,100.0,0,219.8
Town01,0,1,233,238,100.0,0,223.25
Town01,0,3,233,238,100.0,0,224.25
Town01,0,6,233,238,100.0,0,224.5
Town01,0,8,233,238,100.0,0,223.15
Town01,0,1,4,54,100.0,0,164.9
Town01,0,3,4,54,100.0,0,164.8
Town01,0,6,4,54,100.0,0,164.5
Town01,0,8,4,54,100.0,0,165.05

1. Any suggestions about collecting data?

It seems that the trained model is not as good as your pretrained model; I guess the data we collected may not be enough. I only changed the batch size to 64. Besides, I have checked the semantic segmentation: maybe because of insufficient data, it is hard to segment pedestrians and traffic lights, and the average score on train town + train weather is about 70-80.

2. How to change the nocrash traffic parameters?

After checking the args, I cannot find the [empty, regular, dense] traffic settings, and I see the default is empty. After checking the code here, it looks like we need to change car_amounts and ped_amounts; is there any way to change them? (A sketch of what I mean is below.)
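Purely as an illustration of what I am asking, this is how I imagine the two lists could be exposed as arguments. Only the names car_amounts / ped_amounts come from the code; the argparse wiring and the default numbers below are just my guesses:

```python
# Hypothetical sketch only: exposing the traffic settings as CLI arguments.
# Only the names car_amounts / ped_amounts come from the evaluation code;
# the argparse wiring and the placeholder numbers are my own guesses.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--car-amounts', type=int, nargs='+', default=[0, 20, 100],
                    help='background vehicles per traffic level (empty/regular/dense)')
parser.add_argument('--ped-amounts', type=int, nargs='+', default=[0, 50, 250],
                    help='pedestrians per traffic level (empty/regular/dense)')
args = parser.parse_args()

for num_cars, num_peds in zip(args.car_amounts, args.ped_amounts):
    print(f'would evaluate with {num_cars} vehicles and {num_peds} pedestrians')
```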

3. Why do we need to train the semantic segmentation?

I have read your code and found the loss in main_model; I guess the segmentation helps to train the model here?

loss = act_loss + weight * seg_loss

Due to my poor comprehension, I am a little confused about the policy distillation. It looks very similar to the loss in knowledge distillation, i.e. distillation loss + student loss, and in both cases the input is the wide + narrow images. I think the output of the teacher net is a Q-table and the output of the student is a policy. Then the distillation loss is the KL of act_outputs and act_probs, and the student loss is the cross-entropy of wide_seg_outputs and the ground truth wide_sems. What if we only use the action terms, e.g. loss = act_loss + weight * distill_act_loss? Besides, we only need the Q-table to choose actions, and although the segmentation sometimes looks bad, the Q-table always shows the right decision (for example, there is a person in front and the segmentation does not show them in red, but the Q-table shows brake=0.99). A minimal sketch of how I read the current loss is below.
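To check my understanding, here is a minimal sketch of how I read the two terms, written with generic tensors. The function name, variable names, and the weight value below are mine, not the repo's:

```python
import torch.nn.functional as F

def combined_loss(student_act_logits, teacher_act_probs,
                  student_seg_logits, seg_labels, weight=0.05):
    # Distillation term: KL between the teacher's action distribution
    # (derived from its Q-values) and the student's predicted distribution.
    act_loss = F.kl_div(F.log_softmax(student_act_logits, dim=-1),
                        teacher_act_probs, reduction='batchmean')
    # Auxiliary term: cross-entropy between the student's segmentation
    # logits and the ground-truth semantic labels.
    seg_loss = F.cross_entropy(student_seg_logits, seg_labels)
    # Weight value here is a placeholder, not the repo's setting.
    return act_loss + weight * seg_loss
```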

That's all; I will read your paper and code again. Later I will collect data again for the leaderboard, since from the last issue it really needs 1M frames. Thank you so much!

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 6 (3 by maintainers)

Top GitHub Comments

2 reactions
SunHaoOne commented, Jun 20, 2021

Hi, where is the released config_nocrash.yaml file?

Hi, you can find it in the README.md: under "Pretrained weights" there are links for the Leaderboard models and the NoCrash models, so open the NoCrash models link. Or open https://utexas.box.com/s/8lcl7istkr23dtjqqiyu0v8is7ha5u2r, where you can find this yaml.

1 reaction
dotchen commented, Jun 10, 2021

Thank you for providing the details. Here are the answers:

  1. Any suggestions about collecting data?

First of all, do make sure you have the correct setup in the config.yaml. It should match the released config_nocrash.yaml (a generic way to diff the two files is sketched after this answer). Can you send me the config.yaml of your corresponding stages?

Second of all, make sure you are training the model for enough epochs. The released model used 16 epochs (as shown by its name, main_model_16.th), and from the link you sent me your model definitely has not converged yet, as shown by the predicted segmentation map.
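Not from the repo, just a generic snippet to compare the two YAML files key by key:

```python
# Generic sanity check (not part of the repo): diff your config.yaml
# against the released config_nocrash.yaml key by key.
import yaml

with open('config.yaml') as f:
    mine = yaml.safe_load(f)
with open('config_nocrash.yaml') as f:
    released = yaml.safe_load(f)

for key in sorted(set(mine) | set(released)):
    if mine.get(key) != released.get(key):
        print(f'{key}: yours={mine.get(key)!r}  released={released.get(key)!r}')
```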

  2. How to change the nocrash traffic parameters?

If you look at the code, you will find the traffic density is a parameter along with the weather and route. When you specify town=Town01 and weather=train, all three traffic densities will be evaluated sequentially.
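Schematically, it is a nested sweep like the one below. This is a simplified illustration of the idea, not the actual evaluation code, and the route count is just a placeholder:

```python
# Simplified illustration, not the actual evaluation code: with town and
# weather fixed, the benchmark still sweeps every traffic density and route.
def run_episode(town, weather, traffic, route):
    # placeholder for the real episode runner
    print(town, weather, traffic, route)

for town in ['Town01']:
    for weather in ['train']:
        for traffic in ['empty', 'regular', 'dense']:
            for route in range(25):          # route count is illustrative
                run_episode(town, weather, traffic, route)
```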

  3. Why do we need to train semantic segmentation?

If you read the paper, Table 5 in Section 5 has an ablation on the effect of the semantic segmentation regularization. We found that the driving policy generalizes much better when the features are regularized with the segmentation auxiliary loss.

Then the distillation loss is the KL of act_outputs and act_probs, and the student loss is the cross-entropy of wide_seg_outputs and the ground truth wide_sems

act_loss is the KL loss here.

