PhillipMerritt/README.md

Phillip Merritt Portfolio

  • Allows users to analyze changes in sentiment over time in online communities

    e.g. A developer who wants an overview of the reaction to a recently released update could generate a chart of sentiment over the days before and after the release.

  • Full-stack web app built with Vue.js and Flask

  • Comments from the specified subreddit and time range are pulled from the Pushshift API

  • Those comments are sent to the backend, where sentiment analysis is performed using a variety of NLP packages

  • The average sentiment for each discrete time segment is sent back to the frontend, where a chart of the data is generated
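The bucketing-and-averaging step above can be sketched in Python. This is a minimal illustration, not the project's code: `sentiment_over_time` and `toy_score` are hypothetical names, and the stub scorer stands in for whichever NLP package produces the per-comment polarity score.

```python
from collections import defaultdict
from statistics import mean

def sentiment_over_time(comments, bucket_seconds=86400, score=lambda text: 0.0):
    """Average a per-comment sentiment score over discrete time buckets.

    `comments` is an iterable of (utc_timestamp, text) pairs; `score` stands
    in for an NLP package's polarity function (e.g. a VADER-style scorer).
    """
    buckets = defaultdict(list)
    for ts, text in comments:
        buckets[ts // bucket_seconds].append(score(text))
    # One (bucket_start, mean_sentiment) point per bucket, ready to chart
    return sorted((b * bucket_seconds, mean(v)) for b, v in buckets.items())

# Toy data: two comments on day one, one on day two
comments = [(0, "great"), (100, "bad"), (90000, "great")]
toy_score = lambda t: 1.0 if t == "great" else -1.0
points = sentiment_over_time(comments, score=toy_score)
# points -> [(0, 0.0), (86400, 1.0)]
```

With daily buckets, the returned points map directly onto the sentiment-over-days chart the frontend draws.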

  • Unity program that allows users to create a 3D visualization of the internal state of an uploaded neural network across sample inputs

  • Written in C# and Python

  • The Python hook takes the user-uploaded TensorFlow model, retrieves the internal state of each layer for each input, and encodes this data as JSON

  • The convolutional layer activations and a sampling of the dense layer connections are read from the JSON encoding and rendered in 3D
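The JSON handoff between the Python hook and the Unity renderer might look like the sketch below. The document schema and the function name `encode_internal_state` are assumptions for illustration; in the real hook the activation values would come from running the TensorFlow model (e.g. via a `tf.keras.Model` built over the intermediate layer outputs) on each sample input.

```python
import json

def encode_internal_state(layer_activations):
    """Serialize per-layer activations into a JSON document a renderer
    could read. `layer_activations` maps layer name -> flat list of
    activation values for one input sample (hypothetical schema)."""
    doc = {
        "layers": [
            {"name": name, "activations": values}
            for name, values in layer_activations.items()
        ]
    }
    return json.dumps(doc)

# Toy activations for two layers of a small model
encoded = encode_internal_state({"conv1": [0.1, 0.9], "dense1": [0.5]})
decoded = json.loads(encoded)
# decoded["layers"][0] -> {"name": "conv1", "activations": [0.1, 0.9]}
```

Keeping the encoding as plain JSON is what lets the C# side deserialize it with Unity's own JSON utilities, independent of any Python types.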

  • Working implementation of the AlphaZero reinforcement learning algorithm in Python
  • Uses an Information Set Monte Carlo Tree Search (ISMCTS), a modified MCTS that enables tree search in games with hidden information, such as dominoes
  • Game states are encoded as binary vectors, which are passed to a value-predicting Keras CNN to simulate an MCTS rollout
  • Self-play, training, and evaluation are all performed in parallel
  • Agents trained on Texas 42 won against the baseline 95% of the time and performed at a level sufficient for our research at the time
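The binary encoding mentioned above can be sketched for a domino hand: with a standard double-six set there are 28 unique tiles, so a hand becomes a 28-slot one-hot vector. The function name `encode_hand` and the exact slot ordering are assumptions for illustration, not the project's actual encoding.

```python
def encode_hand(hand, max_pips=6):
    """One-hot encode a set of dominoes as a binary vector of the kind
    fed to a value network. Each unique tile (a, b) with a <= b gets one
    slot; a double-six set yields 28 slots."""
    tiles = [(a, b) for a in range(max_pips + 1) for b in range(a, max_pips + 1)]
    index = {t: i for i, t in enumerate(tiles)}
    vec = [0] * len(tiles)
    for a, b in hand:
        # Normalize tile orientation so (1, 0) and (0, 1) share a slot
        vec[index[(min(a, b), max(a, b))]] = 1
    return vec

v = encode_hand({(6, 6), (1, 0)})
# len(v) == 28, with exactly two bits set
```

Stacking such per-feature binary planes (hand, board, bids, and so on) is a common way to build the fixed-size input tensor an AlphaZero-style CNN expects.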
