Backend

Applying the models to the article in question

Google Cloud Platform

We used Google Cloud Platform (GCP) to host our model weights, run our model evaluation code, and send results back to the client. The model evaluation code was written in Python, and the communication between server and client was written in JavaScript.

Storing Information in the Cloud

We stored the trained model weights for both the CNN and the BiLSTM, along with the word embedding dictionary, in GCP so the backend could download them easily. This let us deploy our models quickly and avoid re-downloading the large embedding dictionary, which would otherwise have taken a very long time on each run.
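A minimal sketch of the download-once pattern this enables. The helper below is illustrative, not our exact code: `download_fn` stands in for whatever fetches the file from cloud storage (e.g. a google-cloud-storage `blob.download_to_filename` call), and the file is only fetched when it is not already cached on disk.

```python
import os

def fetch_cached(local_path, download_fn):
    """Download a file from cloud storage only if it is not already
    cached on disk. `download_fn` should write the file to `local_path`
    (for example, a google-cloud-storage blob.download_to_filename call).
    """
    if not os.path.exists(local_path):
        download_fn(local_path)
    return local_path
```

On a warm backend instance, repeated evaluations then skip the download entirely.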

CNN Evaluation

The CNN took an article's title and body as input, converted them to word embeddings, and fed those embeddings into the network, which output whether the model thought the article was real or fake.
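The embedding step can be sketched as below. The dimensions and the tokenization are assumptions (the real values depend on the trained model and the embedding dictionary); unknown tokens map to a zero vector, and the sequence is padded or truncated to a fixed length before being fed to the CNN.

```python
import numpy as np

# Hypothetical dimensions; the real values depend on the trained CNN
# and the word embedding dictionary.
EMB_DIM = 50
MAX_LEN = 100

def embed_text(text, embeddings, dim=EMB_DIM, max_len=MAX_LEN):
    """Map each token to its embedding vector, padding/truncating the
    sequence to a fixed length. Unknown tokens become zero vectors."""
    tokens = text.lower().split()
    vecs = [embeddings.get(tok, np.zeros(dim)) for tok in tokens[:max_len]]
    vecs += [np.zeros(dim)] * (max_len - len(vecs))
    return np.stack(vecs)  # shape (max_len, dim), ready for the CNN
```

The resulting matrices for the title and body would then be passed to the trained model (e.g. a `cnn_model.predict(...)` call; the name is hypothetical).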

BiLSTM Evaluation

Using the results of the internet search (described below), we turned the retrieved article bodies into word embeddings and fed the original article's title embeddings, together with each new body's embeddings, into the BiLSTM. The output is the stance that each retrieved article takes toward the article in question.
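The per-article loop can be sketched as follows. The stance label set shown is the common FNC-1 style set and is an assumption here, as is the `model.predict` interface, which stands in for however the trained BiLSTM is invoked.

```python
import numpy as np

# FNC-1 style stance labels; the exact label set the model used
# is an assumption.
STANCES = ["agree", "disagree", "discuss", "unrelated"]

def stances_for(title_emb, body_embs, model):
    """Run the BiLSTM once per retrieved article body and map the
    highest-probability class to a stance label. `model.predict` is a
    stand-in for the trained model's inference call."""
    labels = []
    for body_emb in body_embs:
        probs = model.predict(title_emb, body_emb)
        labels.append(STANCES[int(np.argmax(probs))])
    return labels
```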

Internet Search for Related Articles

We ran an internet search using the article's title to find related news stories, scraped four news links from the results, and retrieved the article body of each.
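A stdlib-only sketch of the body-extraction step, assuming the common heuristic of collecting the text inside `<p>` tags from each downloaded page (the actual scraper may have used a dedicated library):

```python
from html.parser import HTMLParser

class ArticleBodyParser(HTMLParser):
    """Collect the text inside <p> tags as a rough article body."""
    def __init__(self):
        super().__init__()
        self._in_p = False
        self._parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p = True

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_p = False

    def handle_data(self, data):
        if self._in_p and data.strip():
            self._parts.append(data.strip())

def extract_body(html):
    """Return the paragraph text of a downloaded page as one string."""
    parser = ArticleBodyParser()
    parser.feed(html)
    return " ".join(parser._parts)
```

Running this over the HTML of each of the four links yields the bodies fed to the BiLSTM.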

JavaScript Handling

The client-side JavaScript sends a POST request to the GCP backend containing the article information needed for model evaluation. The server responds with a JSON payload containing the CNN's probability and class output, as well as the four links used by the BiLSTM and each one's stance output.
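On the Python side, assembling that response might look like the sketch below. All field names here are illustrative assumptions, not the exact keys our server used; `stance_results` is a list of (url, stance) pairs, one per retrieved article.

```python
import json

def build_response(cnn_prob, cnn_label, stance_results):
    """Assemble the JSON body sent back to the client.
    Field names are assumptions for illustration."""
    return json.dumps({
        "cnn": {"probability": cnn_prob, "label": cnn_label},
        "related": [{"url": url, "stance": stance}
                    for url, stance in stance_results],
    })
```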