bugfix: leaked semaphore error (#309)

* use config for n_cpu

* remove unused import

* fix process loop

* stop using mp.spawn

ref. https://discuss.pytorch.org/t/how-to-fix-a-sigsegv-in-pytorch-when-using-distributed-training-e-g-ddp/113518/10

* fix commented-out code
N. Hiroto
2023-05-19 18:56:06 +09:00
committed by GitHub
parent 563c64ded9
commit 080b7cdc31
4 changed files with 19 additions and 17 deletions


@@ -115,10 +115,10 @@ class PreProcess:
                 p = multiprocessing.Process(
                     target=self.pipeline_mp, args=(infos[i::n_p],)
                 )
+                p.start()
                 ps.append(p)
+            for p in ps:
+                p.join()
-                p.start()
-            for i in range(n_p):
-                ps[i].join()
         except:
             println("Fail. %s" % traceback.format_exc())
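
The fixed pattern above can be sketched in isolation: start every worker first, then join them all. Here `work`, `run_all`, and the sample inputs are illustrative stand-ins for `self.pipeline_mp` and `infos[i::n_p]`, not the project's actual code:

```python
import multiprocessing


def work(chunk):
    # Stand-in for PreProcess.pipeline_mp: handle one slice of the inputs.
    for item in chunk:
        pass  # real preprocessing would happen here


def run_all(infos, n_p):
    # Start all workers before joining any of them. Exiting the parent
    # without joining children is one common cause of the
    # "leaked semaphore objects" warning from multiprocessing.
    ps = []
    for i in range(n_p):
        # Round-robin split of the inputs, as in infos[i::n_p].
        p = multiprocessing.Process(target=work, args=(infos[i::n_p],))
        p.start()
        ps.append(p)
    for p in ps:
        p.join()
    return ps


if __name__ == "__main__":
    run_all(list(range(8)), n_p=2)
```

Note the start/join split: calling `join()` inside the start loop would run the workers one at a time instead of in parallel.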