diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/.gitignore b/Computer Vision/Gesture-Controlled-Snake-Game/.gitignore
new file mode 100644
index 00000000..3226168c
--- /dev/null
+++ b/Computer Vision/Gesture-Controlled-Snake-Game/.gitignore
@@ -0,0 +1,9 @@
+#Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+
+#PyInstaller
+*.manifest
+*.spec
+build/
+dist/
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/LICENSE b/Computer Vision/Gesture-Controlled-Snake-Game/LICENSE
new file mode 100644
index 00000000..7a8bb253
--- /dev/null
+++ b/Computer Vision/Gesture-Controlled-Snake-Game/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2018 Mohit Singh
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/README.md b/Computer Vision/Gesture-Controlled-Snake-Game/README.md
new file mode 100644
index 00000000..9d94bbb9
--- /dev/null
+++ b/Computer Vision/Gesture-Controlled-Snake-Game/README.md
@@ -0,0 +1,70 @@
+## Gesture Controlled Snake Game
+This program can be used to play a Snake Game (or any similar game) by detecting hand gestures to control the movement of the snake on the screen. It is implemented in Python using the OpenCV library. Although the demo shown here plays a Snake Game, the same approach can be used to play any game or control any application using hand gestures alone.
+
+
+
+Watch on Youtube
+
+[Gesture Controlled Snake Game Using OpenCV and Python](http://www.youtube.com/watch?v=PE_rgc2K0sg)
+
+## Getting Started
+ ### Prerequisites
+ The program depends on the following libraries:
+
+ numpy==1.15.2
+ imutils==0.5.1
+ PyAutoGUI==0.9.38
+ opencv_python==3.4.3.18
+ pygame==1.9.4
+
+Install the libraries using `pip install -r requirements.txt`
+
+### Installing
+
+ 1. Clone the repository to your local computer.
+ 2. Use `python <filename>` to run the specific files described below.
+
+## Built With
+
+ - [OpenCV](https://opencv.org/) - The Open Computer Vision Library
+ - [PyAutoGUI](https://pypi.org/project/PyAutoGUI/) - Cross platform GUI Automation Python Module
+
+## Contributing
+Contributions are welcome, and pull requests are open to everyone willing to improve the project. Feel free to share any suggestions.
+
+## License
+This project is licensed under the MIT License - see the [LICENSE.md](https://github.com/mohitwildbeast/Gesture-Controlled-Snake-Game/blob/master/LICENSE) file for details.
+## Acknowledgements
+
+ - The inspiration for the project came from the PyImageSearch blog post [OpenCV Tracking Object in Images](https://www.pyimagesearch.com/2015/09/21/opencv-track-object-movement/).
+ - PyAutoGUI helped a lot for keyboard automation tasks.
+## File Contents
+The entire project has been made using a bottom-up approach, and it consists of the following files:
+
+ ## [Object Detection](https://github.com/mohitwildbeast/Gesture-Controlled-Snake-Game/blob/master/object-detection.py)
+
+> This script detects an object of a specified colour in the webcam video feed, using the OpenCV library for vision tasks and the HSV colour space to segment objects of the given colour.
+> See the demo on Youtube - [Object Detection and Motion Tracking in OpenCV](https://www.youtube.com/watch?v=mtGBuMlusXQ)
+
+
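The colour segmentation in these scripts thresholds each pixel against a fixed HSV range. A minimal pure-Python sketch of that per-pixel check (using the green bounds from the scripts; `is_green` is an illustrative helper, and OpenCV's hue scale of 0-179 is assumed):

```python
import colorsys

# The green HSV bounds used by the scripts; OpenCV scales hue to 0-179
# and saturation/value to 0-255.
GREEN_LOWER = (29, 86, 6)
GREEN_UPPER = (64, 255, 255)

def is_green(r, g, b):
    """Return True if an RGB pixel falls inside the green HSV range."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h, s, v = h * 179, s * 255, v * 255
    return all(lo <= c <= hi
               for lo, c, hi in zip(GREEN_LOWER, (h, s, v), GREEN_UPPER))

print(is_green(0, 255, 0))   # pure green → True
print(is_green(255, 0, 0))   # pure red → False
```

`cv2.inRange` applies exactly this kind of bound check to every pixel at once, producing the binary mask that the contour detection then operates on.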
+
+ ## [Object Tracking and Direction Detection](https://github.com/mohitwildbeast/Gesture-Controlled-Snake-Game/blob/master/object-tracking-direction-detection.py)
+
+> This script can detect objects specified by the HSV colour and also sense the
+direction of their movement.
+
+> See the demo on Youtube - [Object Tracking and Direction Detection using OpenCV](https://www.youtube.com/watch?v=zapq9QT9uwc)
+
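The direction sensing above boils down to comparing the tracked centroid with its position roughly 10 frames earlier and mapping the signed displacement to a compass direction. A small sketch of that mapping (pure Python; the function name and 50-pixel threshold follow the game-control scripts, chosen here for illustration):

```python
# Map the displacement between an older point and the current point to a
# compass direction, mirroring the dX/dY logic in the tracking scripts.
def classify_direction(prev, curr, threshold=50):
    """Return 'East'/'West'/'North'/'South', or '' if movement is small."""
    dX = prev[0] - curr[0]
    dY = prev[1] - curr[1]
    dirX = ('West' if dX > 0 else 'East') if abs(dX) > threshold else ''
    dirY = ('North' if dY > 0 else 'South') if abs(dY) > threshold else ''
    # Horizontal motion wins ties, matching `dirX if dirX != '' else dirY`
    return dirX if dirX != '' else dirY

print(classify_direction((100, 100), (200, 100)))  # → East
```

In the scripts, `prev` is `pts[-10]` (the oldest buffered centroid) and `curr` is the newest one, so a rightward hand movement yields 'East' and so on.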
+ ## [Game Control Using Object Tracking (Multithreaded Implementation)](https://github.com/mohitwildbeast/Gesture-Controlled-Snake-Game/blob/master/game-control-using-object-tracking-multithreaded.py)
+> This script can detect objects specified by the HSV colour and also sense the
+direction of their movement. Using this script, the Snake Game included in the repo can be played. Implemented using OpenCV, with a separate thread for reading frames.
+
+>See the demo on Youtube - [Gesture Controlled Snake Game Playing with OpenCV and Computer Vision](https://www.youtube.com/watch?v=PE_rgc2K0sg)
+
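To avoid flooding the game with repeated keypresses, the game-control scripts remember the last key sent and only press again when the detected direction changes. A sketch of that debouncing pattern (here `press` is an injected callback standing in for `pyautogui.press`, an assumption for testability):

```python
# Debounce sketch: emit a key only when the detected direction changes,
# mirroring the `last_pressed` guard in the game-control scripts.
def make_controller(press):
    state = {'last_pressed': ''}
    keys = {'East': 'right', 'West': 'left', 'North': 'up', 'South': 'down'}
    def on_direction(direction):
        key = keys.get(direction)
        if key is not None and key != state['last_pressed']:
            press(key)            # e.g. pyautogui.press(key)
            state['last_pressed'] = key
    return on_direction

sent = []
control = make_controller(sent.append)
for d in ['East', 'East', 'East', 'North', 'North']:
    control(d)
print(sent)  # → ['right', 'up']
```

Without this guard, the snake would receive one keypress per processed frame, which would make it impossible to control.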
+## Snake Game
+>The Snake Game shown in this video has been taken from my previous repository [SnakeFun](https://github.com/mohitwildbeast/SnakeFun). The game files correspond to SnakeFun.py and settingsSnakeFun.py.
+
+>Run the game using `python SnakeFun.py`
+
+
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/SnakeGame.py b/Computer Vision/Gesture-Controlled-Snake-Game/SnakeGame.py
new file mode 100644
index 00000000..ed8cf1ff
--- /dev/null
+++ b/Computer Vision/Gesture-Controlled-Snake-Game/SnakeGame.py
@@ -0,0 +1,182 @@
+import random
+import pygame
+import sys
+from pygame.locals import *
+from settingsSnakeFun import *
+
+def main():
+ global CLOCK, SCREEN, FONT
+
+ pygame.init()
+ CLOCK = pygame.time.Clock()
+ SCREEN = pygame.display.set_mode((WINDOWWIDTH, WINDOWHEIGHT))
+ FONT = pygame.font.Font('freesansbold.ttf', 18)
+ pygame.display.set_caption('Snake Game')
+
+ showStartScreen()
+ while True:
+ pygame.mixer.music.play(-1,0.0)
+ runGame()
+ pygame.mixer.music.stop()
+ showGameOverScreen()
+
+def runGame():
+ #Set a random starting point
+ startx = random.randint(5, CELLWIDTH - 6)
+ starty = random.randint(5, CELLHEIGHT - 6)
+ global wormCoords
+ wormCoords = [{'x' : startx, 'y' : starty}, {'x': startx - 1, 'y':starty}, {'x':startx - 2, 'y':starty}]
+ direction = RIGHT
+
+ apple = getRandomLocation()
+
+ while True:
+ for event in pygame.event.get():
+ if event.type == QUIT:
+ terminate()
+ elif event.type == KEYDOWN:
+ if (event.key == K_LEFT or event.key == K_a) and direction != RIGHT:
+ direction = LEFT
+ elif (event.key == K_RIGHT or event.key == K_d) and direction != LEFT:
+ direction = RIGHT
+ elif (event.key == K_UP or event.key == K_w) and direction != DOWN:
+ direction = UP
+ elif (event.key == K_DOWN or event.key == K_s) and direction != UP:
+ direction = DOWN
+ elif event.key == K_ESCAPE:
+ terminate()
+#Collision Detection
+ #Check Collision with edges
+ if wormCoords[HEAD]['x'] == -1 or wormCoords[HEAD]['x'] == CELLWIDTH or wormCoords[HEAD]['y'] == -1 or wormCoords[HEAD]['y'] == CELLHEIGHT:
+ return
+ #Check Collision with snake's body
+ for wormBody in wormCoords[1:]:
+ if wormBody['x'] == wormCoords[HEAD]['x'] and wormBody['y'] == wormCoords[HEAD]['y']:
+ return
+ #Check Collision with Apple
+ if wormCoords[HEAD]['x'] == apple['x'] and wormCoords[HEAD]['y'] == apple['y']:
+ APPLEEATSOUND.play()
+ apple = getRandomLocation()
+ else:
+ del wormCoords[-1]
+
+#Moving the Snake
+ if direction == UP:
+ newHead = {'x': wormCoords[HEAD]['x'], 'y': wormCoords[HEAD]['y'] - 1}
+ elif direction == DOWN:
+ newHead = {'x': wormCoords[HEAD]['x'], 'y': wormCoords[HEAD]['y'] + 1}
+ elif direction == RIGHT:
+ newHead = {'x': wormCoords[HEAD]['x'] + 1, 'y': wormCoords[HEAD]['y']}
+ elif direction == LEFT:
+ newHead = {'x': wormCoords[HEAD]['x'] - 1, 'y': wormCoords[HEAD]['y']}
+ wormCoords.insert(0, newHead)
+
+#Drawing the Screen
+ SCREEN.fill(BGCOLOR)
+ drawGrid()
+ drawWorm(wormCoords)
+ drawApple(apple)
+ drawScore((len(wormCoords) - 3) * 10)
+ pygame.display.update()
+ CLOCK.tick(FPS)
+
+def getTotalScore():
+ return ((len(wormCoords) - 3) * 10)
+
+def drawPressKeyMsg():
+ pressKeyText = FONT.render('Press A Key To Play', True, YELLOW)
+ pressKeyRect = pressKeyText.get_rect()
+ pressKeyRect.center = (WINDOWWIDTH - 200, WINDOWHEIGHT - 100)
+ SCREEN.blit(pressKeyText, pressKeyRect)
+
+def drawSettingsMsg():
+ SCREEN.blit(SETTINGSBUTTON, (WINDOWWIDTH - SETTINGSBUTTON.get_width(), WINDOWHEIGHT - SETTINGSBUTTON.get_height()))
+
+def checkForKeyPress():
+ if len(pygame.event.get(QUIT)) > 0:
+ terminate()
+
+ keyUpEvents = pygame.event.get(KEYUP)
+ if len(keyUpEvents) == 0:
+ return None
+ if keyUpEvents[0].key == K_ESCAPE:
+ terminate()
+ return keyUpEvents[0].key
+
+def showStartScreen():
+ titlefont = pygame.font.Font('freesansbold.ttf', 100)
+ titleText = titlefont.render('SNAKE FUN', True, DARKGREEN)
+ while True:
+ SCREEN.fill(BGCOLOR)
+ titleTextRect = titleText.get_rect()
+ titleTextRect.center = (WINDOWWIDTH / 2, WINDOWHEIGHT / 2)
+ SCREEN.blit(titleText, titleTextRect)
+
+ drawPressKeyMsg()
+ if checkForKeyPress():
+ pygame.event.get()
+ return
+ pygame.display.update()
+ CLOCK.tick(FPS)
+
+def terminate():
+ pygame.quit()
+ sys.exit()
+
+def getRandomLocation():
+ return {'x': random.randint(0, CELLWIDTH - 1), 'y': random.randint(0, CELLHEIGHT - 1)}
+
+def showGameOverScreen():
+ gameOverFont = pygame.font.Font('freesansbold.ttf', 100)
+ gameOverText = gameOverFont.render('Game Over', True, WHITE)
+ gameOverRect = gameOverText.get_rect()
+ totalscoreFont = pygame.font.Font('freesansbold.ttf', 40)
+ totalscoreText = totalscoreFont.render('Total Score: %s' % (getTotalScore()), True, WHITE)
+ totalscoreRect = totalscoreText.get_rect()
+ totalscoreRect.midtop = (WINDOWWIDTH/2, 150)
+ gameOverRect.midtop = (WINDOWWIDTH/2, 30)
+ SCREEN.fill(BGCOLOR)
+ SCREEN.blit(gameOverText, gameOverRect)
+ SCREEN.blit(totalscoreText, totalscoreRect)
+ drawPressKeyMsg()
+ pygame.display.update()
+ pygame.time.wait(1000)
+ checkForKeyPress()
+
+ while True:
+ if checkForKeyPress():
+ pygame.event.get()
+ return
+
+def drawScore(score):
+ scoreText = FONT.render('Score: %s' % (score), True, WHITE)
+ scoreRect = scoreText.get_rect()
+ scoreRect.center = (WINDOWWIDTH - 100, 30)
+ SCREEN.blit(scoreText, scoreRect)
+
+def drawWorm(wormCoords):
+ x = wormCoords[HEAD]['x'] * CELLSIZE
+ y = wormCoords[HEAD]['y'] * CELLSIZE
+ wormHeadRect = pygame.Rect(x, y, CELLSIZE, CELLSIZE)
+ pygame.draw.rect(SCREEN, YELLOW, wormHeadRect)
+
+ for coord in wormCoords[1:]:
+ x = coord['x'] * CELLSIZE
+ y = coord['y'] * CELLSIZE
+ wormSegmentRect = pygame.Rect(x, y, CELLSIZE, CELLSIZE)
+ pygame.draw.rect(SCREEN, GREEN, wormSegmentRect)
+
+def drawApple(coord):
+ x = coord['x'] * CELLSIZE
+ y = coord['y'] * CELLSIZE
+ appleRect = pygame.Rect(x, y, CELLSIZE, CELLSIZE)
+ pygame.draw.rect(SCREEN, RED, appleRect)
+
+def drawGrid():
+ for x in range(0, WINDOWWIDTH, CELLSIZE):
+ pygame.draw.line(SCREEN, DARKGRAY, (x, 0), (x, WINDOWHEIGHT))
+ for y in range(0, WINDOWHEIGHT, CELLSIZE):
+ pygame.draw.line(SCREEN, DARKGRAY, (0, y), (WINDOWWIDTH, y))
+
+if __name__ == '__main__':
+ main()
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/game-control-using-object-tracking-multithreaded.py b/Computer Vision/Gesture-Controlled-Snake-Game/game-control-using-object-tracking-multithreaded.py
new file mode 100644
index 00000000..3ec4bddf
--- /dev/null
+++ b/Computer Vision/Gesture-Controlled-Snake-Game/game-control-using-object-tracking-multithreaded.py
@@ -0,0 +1,204 @@
+'''This script can detect objects specified by the HSV colour and also sense the
+direction of their movement. Using this script, the Snake Game which has been loaded
+in the repo can be played. Implemented using OpenCV.
+Uses a separate thread for reading frames through OpenCV.'''
+
+#Import necessary modules
+import cv2
+import imutils
+import numpy as np
+from collections import deque
+import time
+import pyautogui
+from threading import Thread
+
+#Class implementing a separate thread for reading frames.
+class WebcamVideoStream:
+ def __init__(self):
+ self.stream = cv2.VideoCapture(0)
+ self.ret, self.frame = self.stream.read()
+ self.stopped = False
+ def start(self):
+ Thread(target = self.update, args=()).start()
+ return self
+ def update(self):
+ while True:
+ if self.stopped:
+ return
+ self.ret, self.frame = self.stream.read()
+ def read(self):
+ return self.frame
+ def stop(self):
+ self.stopped = True
+
+"""class VideoShow:
+ def __init__(self, frame = None):
+ self.frame = frame
+ self.stopped = False
+ def start(self):
+ while not self.stopped:
+ cv2.imshow('Game Control Window', self.frame)
+ if(cv2.waitKey(1) == ord('q')):
+ self.stopped = True
+ def stop(self):
+ self.stopped = True
+"""
+#Define HSV colour range for green colour objects
+greenLower = (29, 86, 6)
+greenUpper = (64, 255, 255)
+
+#Used in deque structure to store no. of given buffer points
+buffer = 20
+
+#Used so that pyautogui doesn't click the center of the screen at every frame
+flag = 0
+
+#Points deque structure storing 'buffer' no. of object coordinates
+pts = deque(maxlen = buffer)
+#Counts the minimum no. of frames to be detected where direction change occurs
+counter = 0
+#Change in direction is stored in dX, dY
+(dX, dY) = (0, 0)
+#Variable to store direction string
+direction = ''
+#Last pressed variable to detect which key was pressed by pyautogui
+last_pressed = ''
+
+#Sleep for 2 seconds to let camera initialize properly.
+time.sleep(2)
+
+#Use pyautogui function to detect width and height of the screen
+width,height = pyautogui.size()
+
+#Start video capture in a seperate thread from main thread.
+vs = WebcamVideoStream().start()
+#video_shower = VideoShow(vs.read()).start()
+
+#Click on the centre of the screen, game window should be placed here.
+pyautogui.click(int(width/2), int(height/2))
+
+while True:
+
+ '''game_window = pyautogui.locateOnScreen(r'images\SnakeGameWelcomeScreen.png')
+ game_window_center = pyautogui.center(game_window)
+ pyautogui.click(game_window_center)'''
+
+ #Store the frame read from the capture thread in frame
+ frame = vs.read()
+ #Flip the frame to avoid mirroring effect
+ frame = cv2.flip(frame,1)
+ #Resize the frame to a width of 600 pixels
+ frame = imutils.resize(frame, width = 600)
+ #Blur the frame using a Gaussian filter of kernel size 5 to remove excessive noise
+ blurred_frame = cv2.GaussianBlur(frame, (5,5), 0)
+ #Convert the frame to HSV, as HSV allow better segmentation.
+ hsv_converted_frame = cv2.cvtColor(blurred_frame, cv2.COLOR_BGR2HSV)
+
+ #Create a mask for the frame, showing green values
+ mask = cv2.inRange(hsv_converted_frame, greenLower, greenUpper)
+ #Erode the masked output to delete small white dots present in the masked image
+ mask = cv2.erode(mask, None, iterations = 2)
+ #Dilate the resultant image to restore our target
+ mask = cv2.dilate(mask, None, iterations = 2)
+
+ #Find all contours in the masked image
+ _,cnts,_ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+
+ #Define center of the ball to be detected as None
+ center = None
+
+ #If any object is detected, then only proceed
+ if(len(cnts) > 0):
+ #Find the contour with maximum area
+ c = max(cnts, key = cv2.contourArea)
+ #Find the center of the circle, and its radius of the largest detected contour.
+ ((x, y), radius) = cv2.minEnclosingCircle(c)
+ #Calculate the centroid of the ball, as we need to draw a circle around it.
+ M = cv2.moments(c)
+ center = (int(M['m10'] / M['m00']), int(M['m01'] / M['m00']))
+
+ #Proceed only if a ball of considerable size is detected
+ if radius > 10:
+ #Draw circles around the object as well as its centre
+ cv2.circle(frame, (int(x), int(y)), int(radius), (0,255,255), 2)
+ cv2.circle(frame, center, 5, (0,255,255), -1)
+ #Append the detected object in the frame to pts deque structure
+ pts.appendleft(center)
+
+ #Loop over all stored points; numpy's arange is used for performance
+ for i in np.arange(1, len(pts)):
+ #If no points are detected, move on.
+ if pts[i-1] is None or pts[i] is None:
+ continue
+
+ #Once at least 10 frames have been tracked, compute direction over the last 10 points
+ if counter >= 10 and i == 1 and pts[-10] is not None:
+ #Calculate the displacement between the current point and the point 10 frames earlier
+ dX = pts[-10][0] - pts[i][0]
+ dY = pts[-10][1] - pts[i][1]
+ (dirX, dirY) = ('', '')
+
+ #If the displacement exceeds 50 pixels, a considerable direction change has occurred.
+ if np.abs(dX) > 50:
+ dirX = 'West' if np.sign(dX) == 1 else 'East'
+
+ if np.abs(dY) > 50:
+ dirY = 'North' if np.sign(dY) == 1 else 'South'
+
+ #Set direction variable to the detected direction
+ direction = dirX if dirX != '' else dirY
+ #Write the detected direction on the frame.
+ cv2.putText(frame, direction, (20,40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 3)
+
+ #Draw a trailing red line to depict motion of the object.
+ thickness = int(np.sqrt(buffer / float(i + 1)) * 2.5)
+ cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)
+
+ #If the detected direction is East, press the right key
+ if direction == 'East':
+ if last_pressed != 'right':
+ pyautogui.press('right')
+ last_pressed = 'right'
+ print("Right Pressed")
+ #pyautogui.PAUSE = 2
+ #If the detected direction is West, press the left key
+ elif direction == 'West':
+ if last_pressed != 'left':
+ pyautogui.press('left')
+ last_pressed = 'left'
+ print("Left Pressed")
+ #pyautogui.PAUSE = 2
+ #if detected direction is North, press Up key
+ elif direction == 'North':
+ if last_pressed != 'up':
+ last_pressed = 'up'
+ pyautogui.press('up')
+ print("Up Pressed")
+ #pyautogui.PAUSE = 2
+ #If detected direction is South, press down key
+ elif direction == 'South':
+ if last_pressed != 'down':
+ pyautogui.press('down')
+ last_pressed = 'down'
+ print("Down Pressed")
+ #pyautogui.PAUSE = 2
+
+
+ #video_shower.frame = frame
+ #Show the output frame.
+ cv2.imshow('Game Control Window', frame)
+ key = cv2.waitKey(1) & 0xFF
+ #Update counter as the direction change has been detected.
+ counter += 1
+
+ #If pyautogui has not clicked on center, click it once to focus on game window.
+ if (flag == 0):
+ pyautogui.click(int(width/2), int(height/2))
+ flag = 1
+
+ #If q is pressed, close the window
+ if(key == ord('q')):
+ break
+#After all the processing, release webcam and destroy all windows
+vs.stop()
+cv2.destroyAllWindows()
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/game-control-using-object-tracking.py b/Computer Vision/Gesture-Controlled-Snake-Game/game-control-using-object-tracking.py
new file mode 100644
index 00000000..7d67b852
--- /dev/null
+++ b/Computer Vision/Gesture-Controlled-Snake-Game/game-control-using-object-tracking.py
@@ -0,0 +1,171 @@
+'''This script can detect objects specified by the HSV colour and also sense the
+direction of their movement. Using this script, the Snake Game which has been loaded
+in the repo can be played. Implemented using OpenCV.'''
+
+#Import necessary modules
+import cv2
+import imutils
+import numpy as np
+from collections import deque
+import time
+import pyautogui
+
+#Define HSV colour range for green colour objects
+greenLower = (29, 86, 6)
+greenUpper = (64, 255, 255)
+
+#Used in deque structure to store no. of given buffer points
+buffer = 20
+
+#Used so that pyautogui doesn't click the center of the screen at every frame
+flag = 0
+
+#Points deque structure storing 'buffer' no. of object coordinates
+pts = deque(maxlen = buffer)
+#Counts the minimum no. of frames to be detected where direction change occurs
+counter = 0
+#Change in direction is stored in dX, dY
+(dX, dY) = (0, 0)
+#Variable to store direction string
+direction = ''
+#Last pressed variable to detect which key was pressed by pyautogui
+last_pressed = ''
+
+#Start video capture
+video_capture = cv2.VideoCapture(0)
+
+#Sleep for 2 seconds to let camera initialize properly.
+time.sleep(2)
+
+#Use pyautogui function to detect width and height of the screen
+width,height = pyautogui.size()
+
+
+#Loop until the OpenCV window is closed
+while True:
+
+ '''game_window = pyautogui.locateOnScreen(r'images\SnakeGameWelcomeScreen.png')
+ game_window_center = pyautogui.center(game_window)
+ pyautogui.click(game_window_center)'''
+
+
+ #Store the frame read from the camera; ret holds the return value
+ ret, frame = video_capture.read()
+ #Flip the frame to avoid mirroring effect
+ frame = cv2.flip(frame,1)
+ #Resize the frame to a width of 600 pixels
+ frame = imutils.resize(frame, width = 600)
+ #Blur the frame using a Gaussian filter of kernel size 11 to remove excessive noise
+ blurred_frame = cv2.GaussianBlur(frame, (11,11), 0)
+ #Convert the frame to HSV, as HSV allow better segmentation.
+ hsv_converted_frame = cv2.cvtColor(blurred_frame, cv2.COLOR_BGR2HSV)
+
+ #Create a mask for the frame, showing green values
+ mask = cv2.inRange(hsv_converted_frame, greenLower, greenUpper)
+ #Erode the masked output to delete small white dots present in the masked image
+ mask = cv2.erode(mask, None, iterations = 2)
+ #Dilate the resultant image to restore our target
+ mask = cv2.dilate(mask, None, iterations = 2)
+
+ #Find all contours in the masked image
+ _,cnts,_ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+
+ #Define center of the ball to be detected as None
+ center = None
+
+ #If any object is detected, then only proceed
+ if(len(cnts) > 0):
+ #Find the contour with maximum area
+ c = max(cnts, key = cv2.contourArea)
+ #Find the center of the circle, and its radius of the largest detected contour.
+ ((x, y), radius) = cv2.minEnclosingCircle(c)
+ #Calculate the centroid of the ball, as we need to draw a circle around it.
+ M = cv2.moments(c)
+ center = (int(M['m10'] / M['m00']), int(M['m01'] / M['m00']))
+
+ #Proceed only if a ball of considerable size is detected
+ if radius > 10:
+ #Draw circles around the object as well as its centre
+ cv2.circle(frame, (int(x), int(y)), int(radius), (0,255,255), 2)
+ cv2.circle(frame, center, 5, (0,255,255), -1)
+ #Append the detected object in the frame to pts deque structure
+ pts.appendleft(center)
+
+ #Loop over all stored points; numpy's arange is used for performance
+ for i in np.arange(1, len(pts)):
+ #If no points are detected, move on.
+ if pts[i-1] is None or pts[i] is None:
+ continue
+
+ #Once at least 10 frames have been tracked, compute direction over the last 10 points
+ if counter >= 10 and i == 1 and pts[-10] is not None:
+ #Calculate the displacement between the current point and the point 10 frames earlier
+ dX = pts[-10][0] - pts[i][0]
+ dY = pts[-10][1] - pts[i][1]
+ (dirX, dirY) = ('', '')
+
+ #If the displacement exceeds 50 pixels, a considerable direction change has occurred.
+ if np.abs(dX) > 50:
+ dirX = 'West' if np.sign(dX) == 1 else 'East'
+
+ if np.abs(dY) > 50:
+ dirY = 'North' if np.sign(dY) == 1 else 'South'
+
+ #Set direction variable to the detected direction
+ direction = dirX if dirX != '' else dirY
+
+ #Draw a trailing red line to depict motion of the object.
+ thickness = int(np.sqrt(buffer / float(i + 1)) * 2.5)
+ cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)
+
+ #If the detected direction is East, press the right key
+ if direction == 'East':
+ if last_pressed != 'right':
+ pyautogui.press('right')
+ last_pressed = 'right'
+ print("Right Pressed")
+ #pyautogui.PAUSE = 2
+ #If the detected direction is West, press the left key
+ elif direction == 'West':
+ if last_pressed != 'left':
+ pyautogui.press('left')
+ last_pressed = 'left'
+ print("Left Pressed")
+ #pyautogui.PAUSE = 2
+ #if detected direction is North, press Up key
+ elif direction == 'North':
+ if last_pressed != 'up':
+ last_pressed = 'up'
+ pyautogui.press('up')
+ print("Up Pressed")
+ #pyautogui.PAUSE = 2
+ #If detected direction is South, press down key
+ elif direction == 'South':
+ if last_pressed != 'down':
+ pyautogui.press('down')
+ last_pressed = 'down'
+ print("Down Pressed")
+ #pyautogui.PAUSE = 2
+
+
+ #Write the detected direction on the frame.
+ cv2.putText(frame, direction, (20,40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 3)
+
+ #Show the output frame.
+ cv2.imshow('Game Control Window', frame)
+ key = cv2.waitKey(1) & 0xFF
+ #Update counter as the direction change has been detected.
+ counter += 1
+
+ #If pyautogui has not clicked on center, click it once to focus on game window.
+ if (flag == 0):
+ #Click on the centre of the screen, game window should be placed here.
+ pyautogui.click(int(width/2), int(height/2))
+ flag = 1
+
+ #If q is pressed, close the window
+ if(key == ord('q')):
+ break
+#After all the processing, release webcam and destroy all windows
+video_capture.release()
+cv2.destroyAllWindows()
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/images/Game-Play.png b/Computer Vision/Gesture-Controlled-Snake-Game/images/Game-Play.png
new file mode 100644
index 00000000..6c03b69e
Binary files /dev/null and b/Computer Vision/Gesture-Controlled-Snake-Game/images/Game-Play.png differ
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/images/SnakeGameWelcomeScreen.png b/Computer Vision/Gesture-Controlled-Snake-Game/images/SnakeGameWelcomeScreen.png
new file mode 100644
index 00000000..b0c55a98
Binary files /dev/null and b/Computer Vision/Gesture-Controlled-Snake-Game/images/SnakeGameWelcomeScreen.png differ
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/object-detection.py b/Computer Vision/Gesture-Controlled-Snake-Game/object-detection.py
new file mode 100644
index 00000000..6b6167f4
--- /dev/null
+++ b/Computer Vision/Gesture-Controlled-Snake-Game/object-detection.py
@@ -0,0 +1,94 @@
+'''This script detects an object of a specified colour in the webcam video feed,
+using the OpenCV library for vision tasks and the HSV colour space to segment objects of the given colour.'''
+
+#Import necessary modules
+import cv2
+import imutils
+import numpy as np
+from collections import deque
+import time
+import math
+
+#Define HSV colour range for green colour objects
+greenLower = (29, 86, 6)
+greenUpper = (64, 255, 255)
+
+#Used in deque structure to store no. of given buffer points
+buffer = 20
+
+#Points deque structure storing 'buffer' no. of object coordinates
+pts = deque(maxlen = buffer)
+
+#Start video capture
+video_capture = cv2.VideoCapture(0)
+
+#Sleep for 2 seconds to let camera initialize properly.
+time.sleep(2)
+
+#Loop until the OpenCV window is closed
+while True:
+ #Store the frame read from the camera; ret holds the return value
+ ret, frame = video_capture.read()
+ #Flip the frame to avoid mirroring effect
+ frame = cv2.flip(frame,1)
+ #Resize the frame to a width of 600 pixels
+ frame = imutils.resize(frame, width = 600)
+ #Blur the frame using a Gaussian filter of kernel size 5 to remove excessive noise
+ blurred_frame = cv2.GaussianBlur(frame, (5,5), 0)
+ #Convert the frame to HSV, as HSV allow better segmentation.
+ hsv_converted_frame = cv2.cvtColor(blurred_frame, cv2.COLOR_BGR2HSV)
+
+ #Create a mask for the frame, showing green values
+ mask = cv2.inRange(hsv_converted_frame, greenLower, greenUpper)
+ #Erode the masked output to delete small white dots present in the masked image
+ mask = cv2.erode(mask, None, iterations = 2)
+ #Dilate the resultant image to restore our target
+ mask = cv2.dilate(mask, None, iterations = 2)
+
+ #Display the masked output in a different window
+ cv2.imshow('Masked Output', mask)
+
+ #Find all contours in the masked image
+ _,cnts,_ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+
+ #Define center of the ball to be detected as None
+ center = None
+
+ #If any object is detected, then only proceed
+ if len(cnts) > 0:
+ #Find the contour with maximum area
+ c = max(cnts, key = cv2.contourArea)
+ #Find the center of the circle, and its radius of the largest detected contour.
+ ((x,y), radius) = cv2.minEnclosingCircle(c)
+
+ #Calculate the centroid of the ball, as we need to draw a circle around it.
+ M = cv2.moments(c)
+ center = (int(M['m10'] / M['m00']), int(M['m01'] / M['m00']))
+
+ #Proceed only if a ball of considerable size is detected
+ if radius > 10:
+ #Draw circles around the object as well as its centre
+ cv2.circle(frame, (int(x), int(y)), int(radius), (0,255,255), 2)
+ cv2.circle(frame, center, 5, (0,255,255), -1)
+
+ #Append the detected object in the frame to pts deque structure
+ pts.appendleft(center)
+
+ #This function makes the trailing line behind the detected object
+ for i in range(1, len(pts)):
+ if pts[i-1] is None or pts[i] is None:
+ continue
+
+ thickness = int(np.sqrt(buffer / float(i + 1)) * 2.5)
+ cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)
+
+ #Show the output frame
+ cv2.imshow('Frame', frame)
+ key = cv2.waitKey(1) & 0xFF
+
+ #If q is pressed, close the window
+ if(key == ord('q')):
+ break
+#After all the processing, release webcam and destroy all windows
+video_capture.release()
+cv2.destroyAllWindows()
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/object-tracking-direction-detection.py b/Computer Vision/Gesture-Controlled-Snake-Game/object-tracking-direction-detection.py
new file mode 100644
index 00000000..24ecbf38
--- /dev/null
+++ b/Computer Vision/Gesture-Controlled-Snake-Game/object-tracking-direction-detection.py
@@ -0,0 +1,118 @@
+'''This script can detect objects specified by the HSV color and also sense the
+direction of their movement. Implemented using OpenCV.'''
+
+#Import necessary modules
+import cv2
+import imutils
+import numpy as np
+from collections import deque
+import time
+
+#Define HSV colour range for green colour objects
+greenLower = (29, 86, 6)
+greenUpper = (64, 255, 255)
+
+#Used in deque structure to store no. of given buffer points
+buffer = 20
+
+#Points deque structure storing 'buffer' no. of object coordinates
+pts = deque(maxlen = buffer)
+#Counts the minimum no. of frames to be detected where direction change occurs
+counter = 0
+#Change in direction is stored in dX, dY
+(dX, dY) = (0, 0)
+#Variable to store direction string
+direction = ''
+
+#Start video capture
+video_capture = cv2.VideoCapture(0)
+
+#Sleep for 2 seconds to let camera initialize properly.
+time.sleep(2)
+
+#Loop until the OpenCV window is closed
+while True:
+ #Store the frame read from the camera; ret holds the return value
+ ret, frame = video_capture.read()
+ #Flip the frame to avoid mirroring effect
+ frame = cv2.flip(frame,1)
+ #Resize the frame to a width of 600 pixels
+ frame = imutils.resize(frame, width = 600)
+ #Blur the frame using a Gaussian filter of kernel size 5 to remove excessive noise
+ blurred_frame = cv2.GaussianBlur(frame, (5,5), 0)
+ #Convert the frame to HSV, as HSV allow better segmentation.
+ hsv_converted_frame = cv2.cvtColor(blurred_frame, cv2.COLOR_BGR2HSV)
+
+ #Create a mask for the frame, showing green values
+ mask = cv2.inRange(hsv_converted_frame, greenLower, greenUpper)
+ #Erode the masked output to delete small white dots present in the masked image
+ mask = cv2.erode(mask, None, iterations = 2)
+ #Dilate the resultant image to restore our target
+ mask = cv2.dilate(mask, None, iterations = 2)
+
+ #Find all contours in the masked image
+ _,cnts,_ = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+
+ #Define center of the ball to be detected as None
+ center = None
+
+    #Proceed only if at least one contour was detected
+    if len(cnts) > 0:
+ #Find the contour with maximum area
+ c = max(cnts, key = cv2.contourArea)
+ #Find the center of the circle, and its radius of the largest detected contour.
+ ((x, y), radius) = cv2.minEnclosingCircle(c)
+        #Calculate the centroid of the ball from its image moments
+        M = cv2.moments(c)
+        #Guard against division by zero for degenerate contours
+        if M['m00'] > 0:
+            center = (int(M['m10'] / M['m00']), int(M['m01'] / M['m00']))
+
+        #Proceed only if a ball of considerable size was detected
+        if radius > 10 and center is not None:
+ #Draw circles around the object as well as its centre
+ cv2.circle(frame, (int(x), int(y)), int(radius), (0,255,255), 2)
+ cv2.circle(frame, center, 5, (0,255,255), -1)
+ #Append the detected object in the frame to pts deque structure
+ pts.appendleft(center)
+
+    #Loop over the tracked points, starting from the second one
+    for i in range(1, len(pts)):
+        #Skip pairs where either point is missing
+        if pts[i - 1] is None or pts[i] is None:
+            continue
+
+        #Compute direction only once at least 10 points have been tracked
+        if counter >= 10 and i == 1 and pts[-10] is not None:
+            #Displacement between the current point and the point 10 frames back
+            dX = pts[-10][0] - pts[i][0]
+            dY = pts[-10][1] - pts[i][1]
+ (dirX, dirY) = ('', '')
+
+            #A displacement above 100 pixels counts as a significant direction change
+ if np.abs(dX) > 100:
+ dirX = 'West' if np.sign(dX) == 1 else 'East'
+
+ if np.abs(dY) > 100:
+ dirY = 'North' if np.sign(dY) == 1 else 'South'
+
+ #Set direction variable to the detected direction
+ direction = dirX if dirX != '' else dirY
+
+ #Draw a trailing red line to depict motion of the object.
+ thickness = int(np.sqrt(buffer / float(i + 1)) * 2.5)
+ cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)
+
+ #Write the detected direction on the frame.
+ cv2.putText(frame, direction, (20,40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,0,255), 3)
+
+ #Show the output frame.
+ cv2.imshow('Window- Direction Detection', frame)
+ key = cv2.waitKey(1) & 0xFF
+    #Increment the processed-frame counter
+ counter += 1
+
+    #If 'q' is pressed, exit the loop
+    if key == ord('q'):
+ break
+#After all the processing, release webcam and destroy all windows
+video_capture.release()
+cv2.destroyAllWindows()
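The displacement thresholding in the tracking loop above can be exercised on its own. A minimal sketch, assuming the same deque layout as the script (newest point at index 0 via `appendleft`); the `classify_direction` helper is hypothetical and not part of the script:

```python
from collections import deque

def classify_direction(pts, threshold=100):
    """Classify motion from a deque of (x, y) centres, comparing the
    newest point against the one 10 frames back, mirroring the
    dX/dY thresholding used in the tracking loop above."""
    if len(pts) <= 10 or pts[0] is None or pts[-10] is None:
        return ''
    dX = pts[-10][0] - pts[0][0]
    dY = pts[-10][1] - pts[0][1]
    dirX = ('West' if dX > 0 else 'East') if abs(dX) > threshold else ''
    dirY = ('North' if dY > 0 else 'South') if abs(dY) > threshold else ''
    # Horizontal movement wins when both axes change, as in the original script
    return dirX if dirX else dirY

# Simulate a rightward sweep: x grows, so the point 10 frames back is further left
track = deque(maxlen=20)
for x in range(0, 450, 30):
    track.appendleft((x, 100))
print(classify_direction(track))  # -> East
```

Because the camera frame is flipped before processing, a positive dX (leftward pixel motion) maps to 'West' and a negative dX to 'East', matching the script's `np.sign` checks.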
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/requirements.txt b/Computer Vision/Gesture-Controlled-Snake-Game/requirements.txt
new file mode 100644
index 00000000..95719280
--- /dev/null
+++ b/Computer Vision/Gesture-Controlled-Snake-Game/requirements.txt
@@ -0,0 +1,5 @@
+numpy==1.15.2
+imutils==0.5.1
+PyAutoGUI==0.9.38
+opencv_python==3.4.3.18
+pygame==1.9.4
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/settingsSnakeFun.py b/Computer Vision/Gesture-Controlled-Snake-Game/settingsSnakeFun.py
new file mode 100644
index 00000000..4cfdb18e
--- /dev/null
+++ b/Computer Vision/Gesture-Controlled-Snake-Game/settingsSnakeFun.py
@@ -0,0 +1,48 @@
+import pygame
+from pygame.locals import *
+
+pygame.init()
+pygame.mixer.init()
+pygame.font.init()
+
+FPS = 3
+WINDOWWIDTH = 640
+WINDOWHEIGHT = 480
+CELLSIZE = 20
+assert WINDOWHEIGHT % CELLSIZE == 0, "Window Height must be a multiple of Cell Size"
+assert WINDOWWIDTH % CELLSIZE == 0, "Window Width must be a multiple of Cell Size"
+CELLWIDTH = int(WINDOWWIDTH / CELLSIZE)
+CELLHEIGHT = int(WINDOWHEIGHT / CELLSIZE)
+
+#Colour Codes
+# R G B
+WHITE = (255, 255, 255)
+BLACK = (0, 0, 0)
+RED = (255, 0, 0)
+GREEN = (0, 255, 0)
+DARKGREEN = (0, 155, 0)
+DARKGRAY = (40, 40, 40)
+YELLOW = (255, 255, 0)
+
+BGCOLOR = BLACK
+
+#Control Keys
+UP = 'up'
+DOWN = 'down'
+LEFT = 'left'
+RIGHT = 'right'
+
+HEAD = 0 #Index of the snake's head
+
+#Game Sounds
+APPLEEATSOUND = pygame.mixer.Sound(r"sounds/appleEatSound.wav")
+#Queue the background track; note pygame.mixer.music.load returns None, so BGMUSIC is None
+BGMUSIC = pygame.mixer.music.load(r"sounds/bgmusic.mid")
+
+def levelSelect(level):
+    '''Set the global FPS according to the chosen difficulty level.
+    (level is taken as a parameter; the original read an undefined global.)'''
+    global FPS
+    if level == "EASY":
+        FPS = 4
+    elif level == "MEDIUM":
+        FPS = 7
+    elif level == "HARD":
+        FPS = 10
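The difficulty-to-FPS mapping above can also be expressed as a dict lookup. A hypothetical sketch (`select_fps` and `LEVEL_FPS` are illustrative names, not part of the game code):

```python
# Hypothetical helper mirroring the difficulty-to-FPS mapping in settingsSnakeFun.py
LEVEL_FPS = {"EASY": 4, "MEDIUM": 7, "HARD": 10}

def select_fps(level, default=3):
    """Return frames-per-second for a difficulty level, falling back
    to the module's initial FPS of 3 for unrecognised levels."""
    return LEVEL_FPS.get(level, default)

print(select_fps("MEDIUM"))   # -> 7
print(select_fps("UNKNOWN"))  # -> 3
```

A dict lookup keeps the mapping in one place and, unlike the elif chain, gives unrecognised levels an explicit fallback rather than silently leaving FPS unchanged.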
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/sounds/appleEatSound.wav b/Computer Vision/Gesture-Controlled-Snake-Game/sounds/appleEatSound.wav
new file mode 100644
index 00000000..f42ea443
Binary files /dev/null and b/Computer Vision/Gesture-Controlled-Snake-Game/sounds/appleEatSound.wav differ
diff --git a/Computer Vision/Gesture-Controlled-Snake-Game/sounds/bgmusic.mid b/Computer Vision/Gesture-Controlled-Snake-Game/sounds/bgmusic.mid
new file mode 100644
index 00000000..04a78d93
Binary files /dev/null and b/Computer Vision/Gesture-Controlled-Snake-Game/sounds/bgmusic.mid differ