
final-project

National Action Council for Minorities in Engineering (NACME) Google Applied Machine Learning Intensive (AMLI) at the University of Kentucky

Developed by:

Mentor:

Description

Using neural networks to generate story-style captions for images in the VIST dataset.

Goal


The Difference between Image Captioning and Visual Storytelling


Usage instructions

The `vist_requirements.txt` file lists all Python libraries that the notebooks depend on. Install them with:

pip install -r vist_requirements.txt

  1. Fork this repo
  2. Change directories into your project
  3. On the command line, type `pip3 install -r vist_requirements.txt`
  4. ....

How To Use

Preprocessing

Using Resnet50FeatureExtraction.ipynb, pickle the images you are working with and place the pickles in a folder that will serve as your pickled image directory.

Make sure that you have downloaded the dictionaries folder to have the keys used in the main code.
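The pickling step above can be sketched as follows. This is a minimal illustration, not the notebook's actual code: it assumes the ResNet50 features have already been extracted as 2048-dimensional NumPy vectors (the size of ResNet50's pooled output), and the image IDs and output directory name are placeholders.

```python
import pickle
from pathlib import Path

import numpy as np

def save_features(features: dict, out_dir: str) -> None:
    """Pickle one feature vector per image into out_dir,
    one file per image ID (e.g. img_001.pkl)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for image_id, vec in features.items():
        with open(out / f"{image_id}.pkl", "wb") as f:
            pickle.dump(vec, f)

# Placeholder features standing in for real ResNet50 pooled outputs
feats = {
    "img_001": np.zeros(2048, dtype=np.float32),
    "img_002": np.ones(2048, dtype=np.float32),
}
save_features(feats, "pickled_images")
```

The main code can then load each vector back with `pickle.load` using the same image ID as the key, which is why the keys in the dictionaries folder must match the pickled filenames.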

Code Implementation


Model

From there, you should have everything you need to run the model, which follows an encoder-decoder architecture: pickled image features are fed to a decoder that generates the story caption word by word.
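The generation step can be sketched as a greedy decoding loop. This is a hedged illustration, not the repository's model: the tiny `idx_to_word` vocabulary, the `stub_step` "model," and the 2048-dimensional feature vector are all placeholders standing in for the real dictionaries-folder vocabulary and trained network.

```python
import numpy as np

# Hypothetical vocabulary; the real mapping comes from the dictionaries folder.
idx_to_word = {0: "<start>", 1: "a", 2: "dog", 3: "<end>"}

def greedy_decode(step_fn, feature, max_len=20):
    """Greedy decoding: feed the image feature and the tokens so far,
    take the argmax word at each step, stop at <end>."""
    tokens = [0]  # <start>
    for _ in range(max_len):
        logits = step_fn(feature, tokens)
        nxt = int(np.argmax(logits))
        tokens.append(nxt)
        if idx_to_word[nxt] == "<end>":
            break
    # Drop <start> and <end> when joining into a sentence
    return " ".join(idx_to_word[t] for t in tokens[1:-1])

# Stub "model" that always emits a fixed word sequence,
# just to show the shape of the decoding loop.
script = [1, 2, 3]
def stub_step(feature, tokens):
    logits = np.zeros(len(idx_to_word))
    logits[script[len(tokens) - 1]] = 1.0
    return logits

feature = np.zeros(2048, dtype=np.float32)  # a pickled ResNet50 feature
print(greedy_decode(stub_step, feature))  # prints "a dog"
```

In the real model, `step_fn` would be a forward pass of the trained decoder conditioned on the ResNet50 feature rather than a scripted stub.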

References

https://github.com/sultanalnahian/Reverse-Frogger-A3PS/tree/main/advice%20generator