How to debug CARS if things don't work well
Hi,
This project is awesome. I’ve encountered some issues and hope you can help.
I am running CARS on a pair of Pleiades stereo images and the result is not optimal. Here is the CARS result:
[screenshot: DSM produced by CARS]
In the meantime, I used another open-source software package on the same Pleiades data, and the result is as follows:
[screenshot: DSM produced by the other software]
Basically, I would like to pick one software package and stick with it, so I would like to understand how to debug CARS in this situation. I am thinking of checking intermediate files, since CARS processes a raster by tiles. Also, I did not run bundle adjustment beforehand. Would it help to run bundle adjustment first?
I am expecting to see some intermediate files created during processing. For example, since a raster is first split into smaller tiles, is it possible to visualize the tiled raster to see what is going on, or to check the SIFT matching points?
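For example, suppose the SIFT matches for a pair could be dumped as a NumPy array; a quick sanity check on them might look like the sketch below (the matches.npy filename and the four-column layout are assumptions for illustration, not a documented CARS output):

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed layout: one row per match,
# columns = (col_left, row_left, col_right, row_right) in epipolar geometry.
matches = np.load("compute_dsm_outdir/matches.npy")  # hypothetical dump
print(f"{len(matches)} SIFT matches loaded")

# In epipolar geometry the column difference approximates the disparity;
# heavy outliers or a bimodal histogram would hint at a poor geometric
# model (e.g. missing bundle adjustment).
plt.hist(matches[:, 2] - matches[:, 0], bins=100)
plt.xlabel("column shift between images (pixels)")
plt.ylabel("match count")
plt.title("Distribution of SIFT match disparities")
plt.show()
```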
Thank you.
Here is the traceback:
2022-08-12 10:27:18,991 - distributed.nanny - WARNING - Worker process still alive after 3.9999990463256836 seconds, killing
2022-08-12 10:27:19,043 - distributed.nanny - WARNING - Worker process still alive after 3.999998474121094 seconds, killing
2022-08-12 10:27:19,043 - distributed.nanny - WARNING - Worker process still alive after 3.9999982833862306 seconds, killing
2022-08-12 10:27:19,044 - distributed.nanny - WARNING - Worker process still alive after 3.999998474121094 seconds, killing
2022-08-12 10:27:19,044 - distributed.nanny - WARNING - Worker process still alive after 3.999998474121094 seconds, killing
2022-08-12 10:27:19,044 - distributed.nanny - WARNING - Worker process still alive after 3.999998664855957 seconds, killing
2022-08-12 10:27:19,044 - distributed.nanny - WARNING - Worker process still alive after 3.9999988555908206 seconds, killing
2022-08-12 10:27:19,095 - distributed.nanny - WARNING - Worker process still alive after 3.9999990463256836 seconds, killing
2022-08-12 10:27:19,095 - distributed.nanny - WARNING - Worker process still alive after 3.999998664855957 seconds, killing
2022-08-12 10:27:19,096 - distributed.nanny - WARNING - Worker process still alive after 3.999998664855957 seconds, killing
2022-08-12 10:27:19,096 - distributed.nanny - WARNING - Worker process still alive after 3.9999988555908206 seconds, killing
2022-08-12 10:27:19,149 - distributed.nanny - WARNING - Worker process still alive after 3.999998474121094 seconds, killing
2022-08-12 10:27:19,149 - distributed.nanny - WARNING - Worker process still alive after 3.999998474121094 seconds, killing
2022-08-12 10:27:20,542 - tornado.application - ERROR - Exception in callback functools.partial(<bound method AsyncProcess._on_exit of <AsyncProcess Dask Worker process (from Nanny)>>, -15)
Traceback (most recent call last):
File "/cars/venv/lib/python3.8/site-packages/tornado/ioloop.py", line 741, in _run_callback
ret = callback()
File "/cars/venv/lib/python3.8/site-packages/distributed/process.py", line 139, in _on_exit
self._exit_callback(self)
File "/cars/venv/lib/python3.8/site-packages/distributed/nanny.py", line 695, in _on_exit
self.mark_stopped()
File "/cars/venv/lib/python3.8/site-packages/distributed/nanny.py", line 733, in mark_stopped
self.on_exit(r)
File "/cars/venv/lib/python3.8/site-packages/distributed/nanny.py", line 501, in _on_exit_sync
self._ongoing_background_tasks.call_soon(self._on_exit, exitcode)
File "/cars/venv/lib/python3.8/site-packages/distributed/core.py", line 190, in call_soon
raise AsyncTaskGroupClosedError(
distributed.core.AsyncTaskGroupClosedError: Cannot schedule a new coroutine function as the group is already closed.
(the same traceback is logged again at 10:27:20,543 for a second killed worker)
And in compute_dsm_outdir/22-08-12_10h00m_compute_dsm.log:
22-08-12 10:27:14 :: ERROR :: CARS terminated with following error
Traceback (most recent call last):
File "/cars/cars/cars.py", line 748, in main_cli
run_compute_dsm(args, dry_run)
File "/cars/cars/cars.py", line 680, in run_compute_dsm
compute_dsm.run(
File "/cars/cars/pipelines/compute_dsm.py", line 1172, in run
write_dsm.write_geotiff_dsm(
File "/cars/cars/pipelines/write_dsm.py", line 324, in write_geotiff_dsm
for future, raster_tile in tqdm(
File "/cars/venv/lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/cars/venv/lib/python3.8/site-packages/distributed/client.py", line 4883, in __next__
return self._get_and_raise()
File "/cars/venv/lib/python3.8/site-packages/distributed/client.py", line 4872, in _get_and_raise
raise exc.with_traceback(tb)
distributed.scheduler.KilledWorker: ('images_pair_to_3d_points-13efe864-f5ec-4d74-a056-0964cffb9b57', <WorkerState 'tcp://127.0.0.1:38559', name: 40, status: closed, memory: 0, processing: 11>)
Top GitHub Comments
Hi David: I’ve tried it again. The result looks great. Thanks a lot.
Hi @lionlai1989, with the new API, you have to provide all the arguments through the config.json file
(cf. https://cars.readthedocs.io/en/latest/user_guide/configuration.html#orchestrator)
Regards, David
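For illustration, here is a minimal sketch of building such a config.json in Python. The section names follow the configuration guide linked above, but the exact schema changes between CARS versions, and the file paths and worker count are placeholders; treat this as an assumption-laden example, not a verbatim template:

```python
import json

# Illustrative CARS configuration with "inputs", "orchestrator" and
# "output" sections, as described in the configuration guide. Image and
# geomodel paths are placeholders; exact key names may differ in your
# CARS version, so check the linked documentation page.
config = {
    "inputs": {
        "sensors": {
            "left":  {"image": "img1.tif", "geomodel": "img1.geom"},
            "right": {"image": "img2.tif", "geomodel": "img2.geom"},
        },
        "pairing": [["left", "right"]],
    },
    "orchestrator": {
        # Fewer Dask workers (or another orchestrator mode) can help when
        # workers are killed for exceeding memory, as in the KilledWorker
        # error shown above.
        "mode": "local_dask",
        "nb_workers": 4,
    },
    "output": {"directory": "compute_dsm_outdir"},
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```

With recent CARS releases, the resulting file is then passed directly to the command line (cars config.json).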