Phodexr is a project that enables searching through images with text. The search is based solely on the visual content of the image, not the text surrounding it. To tackle this problem, phodexr uses CLIP, a model released by OpenAI that connects image and text embeddings in a shared embedding space.
The CLIP model follows the original paper and is trained from scratch.
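At search time, the core operation in this kind of CLIP-based retrieval is cosine similarity between the text query's embedding and each image's embedding in the shared space. Below is a minimal sketch of that ranking step with toy placeholder vectors; the function name and the toy embeddings are illustrative, not part of phodexr's actual codebase.

```python
import numpy as np

def rank_images(text_emb, image_embs):
    # Normalize to unit length; CLIP similarity is cosine similarity
    t = text_emb / np.linalg.norm(text_emb)
    im = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    scores = im @ t            # cosine similarity per image
    return np.argsort(scores)[::-1]  # image indices, best match first

# Toy embeddings: image 1 points in nearly the same direction as the query
query = np.array([1.0, 0.0, 0.0])
images = np.array([[0.0, 1.0, 0.0],
                   [0.9, 0.1, 0.0],
                   [0.0, 0.0, 1.0]])
print(rank_images(query, images))  # index 1 ranks first
```

In a real pipeline, `query` would come from CLIP's text encoder and each row of `images` from its image encoder; the ranking logic stays the same.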
Submit a query to see the predictions!
We are a group of students at UC Berkeley working on tackling real-world problems with artificial intelligence.