
Collecting and analyzing egocentric videos from children

If you're looking to collect your own dataset with a new camera, please e-mail me at bria [at]; we're finalizing a new high-resolution camera based on the GoPro Hero Bones camera (see the Research page for a photo), with more information to come.

If you're looking for existing, available egocentric video datasets, you can:

  • find the paper on the SAYCam dataset here

  • get access to the SAYCam dataset through Databrary

  • get access to the in-lab dataset through Databrary

If you're looking for information on how to analyze social information in video datasets, you can:

  • First, see the Discussion section of Long et al. (in press, Developmental Psychology); there are some limitations to this method!

  • If you want to run these models on your data, first check out the OpenPose repository we used.

  • Look through this repository, which has instructions for following our pipeline; we applied the algorithm without fine-tuning and then used the face/hand keypoints.

  • You'll need access to a server with a GPU to run OpenPose (there may be easier algorithms/codebases available; let me know if you find one). Note that this creates a serious data management issue: OpenPose produces one JSON file per frame of each video, with keypoints for every detected person in every frame.
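To give a sense of the data wrangling involved, here is a minimal sketch (the function name and directory layout are my own, not part of our pipeline) that collapses OpenPose's one-JSON-per-frame output into a per-frame summary of detected faces and hands:

```python
import glob
import json
import os

def summarize_openpose_frames(json_dir):
    """Aggregate OpenPose's one-JSON-per-frame output into per-frame counts.

    Each *_keypoints.json file has a "people" list; each person entry holds
    flat [x, y, confidence, x, y, confidence, ...] arrays for the pose, face,
    and hand keypoints.
    """
    summary = []
    for path in sorted(glob.glob(os.path.join(json_dir, "*_keypoints.json"))):
        with open(path) as f:
            frame = json.load(f)
        people = frame.get("people", [])
        # Count a face/hand as detected if any of its keypoints has
        # nonzero confidence (confidences sit at every third position).
        n_faces = sum(
            any(c > 0 for c in p.get("face_keypoints_2d", [])[2::3])
            for p in people
        )
        n_hands = sum(
            any(c > 0 for c in p.get(k, [])[2::3])
            for p in people
            for k in ("hand_left_keypoints_2d", "hand_right_keypoints_2d")
        )
        summary.append({"frame": os.path.basename(path),
                        "n_people": len(people),
                        "n_faces": n_faces,
                        "n_hands": n_hands})
    return summary
```

From a summary like this you can then compute the proportion of frames with a visible face or hand, which is the kind of social measure we extracted.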

Collecting & analyzing digital children's drawings

If you're looking for our available drawings datasets, please hang tight: we will release the large datasets upon publication. Send me an e-mail and I will add you to a notification list.

If you want to analyze your own digital drawing data:

  • For model embeddings

    • I recommend this repository for getting OpenAI's CLIP model embeddings very easily

    • We used custom PyTorch code, but THINGSvision is a great new resource for getting VGG-19 and other DNN model embeddings

  • If you want to get stroke annotations, you can browse our codebase but do note that it is not intended for public use 


If you're looking to collect your own drawing data: 

Generating and using texforms — see dedicated page
