<div dir="ltr">Hi John, great to hear from you. I assume you are asking about image augmentation and pre-processing.<div>There are more or less standard steps to organise the downloaded images. If you google, you should be able to find suitable scripts. I recall I followed the ones provided by Soumith Chintala, though he also used bits provided by someone else. The thing is, you do it once and then forget about it. You can also remove some bad images: I recall there are some which give a warning on read due to bad EXIF info etc.; these can be overwritten. Cropping to the relevant area using the bounding boxes might be an interesting option.</div><div>Augmentation is more interesting. There are many papers covering the overall training process from scratch. Reading "Training ImageNet in 1 Hour" could be one starting option: <a href="https://arxiv.org/abs/1706.02677" target="_blank">https://arxiv.org/abs/1706.02677</a></div><div>Then follow the references on data augmentation and you'll end up with a few key papers which everyone cites.</div><div>The ResNet "school" does things slightly differently from VGG.</div><div>Horovod provides examples for starters: <a href="https://github.com/horovod/horovod/tree/master/examples" target="_blank">https://github.com/horovod/horovod/tree/master/examples</a></div><div>What they don't do is random cropping.</div><div>Also keep in mind how the final quality of the training is assessed: random crop, central crop, ten-crop (corner and central crops plus their reflections), etc.</div><div><br></div><div>Thanks for the pointer to the new meetup. I love both HPC and AI. However, I don't see the announcement about the meeting on 21 August.
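To make the cropping options above concrete, here is a plain-Python sketch of the crop geometry only (box coordinates; applying them to pixels is left to whatever image library you use, and 224/256 are just the conventional ImageNet sizes, not anything from this thread):

```python
import random

def random_crop_box(height, width, crop=224):
    """Pick a random crop window (top, left, bottom, right) for training."""
    top = random.randint(0, height - crop)
    left = random.randint(0, width - crop)
    return (top, left, top + crop, left + crop)

def central_crop_box(height, width, crop=224):
    """The single centre crop commonly used for validation."""
    top = (height - crop) // 2
    left = (width - crop) // 2
    return (top, left, top + crop, left + crop)

def five_eval_crops(height, width, crop=224):
    """Four corner crops plus the centre crop. Evaluating each of these on
    the horizontally flipped image as well gives the ten-crop protocol."""
    h, w, c = height, width, crop
    return [
        (0, 0, c, c),               # top-left
        (0, w - c, c, w),           # top-right
        (h - c, 0, h, c),           # bottom-left
        (h - c, w - c, h, w),       # bottom-right
        central_crop_box(h, w, c),  # centre
    ]
```

The point is only that the training-time and evaluation-time crops are different operations, so the reported accuracy depends on which protocol you picked.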
Hope it will appear later.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 29 Jun 2019 at 07:49, John Hearns via Beowulf <<a href="mailto:beowulf@beowulf.org">beowulf@beowulf.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Igor, if there are any papers published on what you are doing with these images, I would be very interested.<div>I went to the new London HPC and AI Meetup on Thursday; one talk, by Odin Vision, was excellent.</div><div>I recommend the new Meetup to anyone in the area. Next meeting: 21st August.</div><div><br></div><div>And a plug for Verne Global - they provided free Icelandic beer.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, 29 Jun 2019 at 05:43, INKozin via Beowulf <<a href="mailto:beowulf@beowulf.org" target="_blank">beowulf@beowulf.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="auto">Converting the files to TFRecords or similar would be one obvious approach if you are concerned about metadata. But then I'd understand why some people would not want that (size, the augmentation process). I assume you are doing the training in a distributed fashion using MPI via Horovod or similar, and it might be tempting to do file partitioning across the nodes. However, doing so introduces a bias into minibatches (and requires custom preprocessing). If you partition carefully by mapping classes to nodes it may work, but I also understand why some wouldn't be totally happy with that. I've trained Keras/TF/Horovod models on ImageNet using up to six nodes, each with four P100/V100 GPUs, and it worked reasonably well.
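For what it's worth, the partitioning trade-off can be sketched in a few lines of plain Python (no Horovod here; in a real job `rank` and `size` would come from `hvd.rank()` and `hvd.size()`, and the file list is made up):

```python
def shard_interleaved(files, rank, size):
    """Give each worker every size-th file. After a global shuffle each
    epoch, every shard samples all classes, so minibatches stay unbiased."""
    return files[rank::size]

def shard_contiguous(files, rank, size):
    """Give each worker a contiguous slice. If the list is ordered by class
    (as an ImageNet directory walk is), each node sees only a few classes --
    the minibatch bias being discussed."""
    per = (len(files) + size - 1) // size
    return files[rank * per:(rank + 1) * per]

# Toy listing: 4 classes x 4 images, ordered by class as on disk.
files = [f"class{c}/img{i}.jpg" for c in range(4) for i in range(4)]
```

With four workers, the contiguous scheme hands worker 0 only `class0`, while the interleaved scheme hands it one image from each class.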
As the training still took a few days, copying the data to local NVMe disks was a good option.<div dir="auto">HTH</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, 28 Jun 2019, 18:47 Mark Hahn, <<a href="mailto:hahn@mcmaster.ca" target="_blank">hahn@mcmaster.ca</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi all,<br>
I wonder if anyone has comments on ways to avoid metadata bottlenecks<br>
for certain kinds of small-io-intensive jobs. For instance, ML on imagenet,<br>
which seems to be a massive collection of trivial-sized files.<br>
<br>
A good answer is "beef up your MD server, since it helps everyone".<br>
That's a bit naive, though (no money-trees here.)<br>
<br>
How about things like putting the dataset into squashfs or some other <br>
image that can be loop-mounted on demand? sqlite? perhaps even a format<br>
that can simply be mmaped as a whole?<br>
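The sqlite variant is easy to sketch with nothing but the standard library: pack the tiny files into one database offline, then every node opens that single file read-only, so per-image metadata traffic disappears. Purely illustrative (the table layout and file names are made up, and a real loader would keep the connection open across reads):

```python
import sqlite3

def pack(db_path, items):
    """Pack (name, bytes) pairs into one SQLite file -- done once, offline."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS files (name TEXT PRIMARY KEY, data BLOB)")
    con.executemany("INSERT OR REPLACE INTO files VALUES (?, ?)", items)
    con.commit()
    con.close()

def read(db_path, name):
    """Fetch one file's bytes: one indexed lookup inside a single big file,
    instead of an open()/stat() per image against the metadata server."""
    con = sqlite3.connect(db_path)
    row = con.execute("SELECT data FROM files WHERE name = ?", (name,)).fetchone()
    con.close()
    return row[0] if row else None
```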
<br>
personally, I tend to dislike the approach of having a job stage tons of<br>
stuff onto node storage (when it exists) simply because that guarantees a<br>
waste of cpu/gpu/memory resources for however long the stagein takes...<br>
<br>
thanks, mark hahn.<br>
-- <br>
operator may differ from spokesperson. <a href="mailto:hahn@mcmaster.ca" rel="noreferrer" target="_blank">hahn@mcmaster.ca</a><br>
_______________________________________________<br>
Beowulf mailing list, <a href="mailto:Beowulf@beowulf.org" rel="noreferrer" target="_blank">Beowulf@beowulf.org</a> sponsored by Penguin Computing<br>
To change your subscription (digest mode or unsubscribe) visit <a href="https://beowulf.org/cgi-bin/mailman/listinfo/beowulf" rel="noreferrer noreferrer" target="_blank">https://beowulf.org/cgi-bin/mailman/listinfo/beowulf</a><br>
</blockquote></div>
</blockquote></div>
</blockquote></div>