For my master's project, I developed a stock prediction model based on financial data and news articles from 2020, in order to analyze how financial information and news coverage affected the predictability of stock prices during the global COVID-19 pandemic. For financial information, each stock's daily closing price, high price, low price, and trading volume are considered. Daily stock news is retrieved and processed to produce a quantitative sentiment score. In addition, daily top headlines are processed to produce a separate COVID-19 sentiment score, which is also fed into the prediction model. A lexical analysis approach is used to compute the sentiment scores for both stock news and COVID-19 news. The broader goal of this project and related future work is to develop a financial market prediction model that can be applied during future global pandemics to minimize the loss of financial capital.
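The lexical analysis step can be illustrated with a minimal sketch. The word lists below are illustrative placeholders, not the actual lexicon used in the project:

```python
# Minimal sketch of lexicon-based (lexical analysis) sentiment scoring.
# POSITIVE/NEGATIVE are placeholder word lists, not the project's real lexicon.

POSITIVE = {"gain", "growth", "surge", "record", "profit"}
NEGATIVE = {"loss", "drop", "decline", "lockdown", "crash"}

def sentiment_score(headline: str) -> float:
    """Score a headline in [-1, 1]: (pos - neg) / total sentiment words."""
    words = headline.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

A headline with only positive lexicon words scores 1.0, only negative words scores -1.0, and a headline with no lexicon words scores 0.0.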
For my Artificial Intelligence class's final group project, I was in charge of developing a genetic algorithm approach to a spin-off of the traditional Snake game. The project uses an implementation of the game with two snakes: prey versus predator. The goal of the prey is to collect as many fruits on the board as possible without running into itself or the predator, while the goal of the predator is to hunt the prey snake without running into itself. The heuristic functions of both snakes are updated across generations, based on the success or failure of previous runs.
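The generational update loop can be sketched as a standard genetic algorithm over heuristic weight vectors. In this sketch the fitness function is a stand-in; in the project, fitness would come from simulating game runs:

```python
import random

def fitness(weights):
    # Placeholder fitness: peaks when all weights equal 0.5.
    # In the project, this would be the score of a simulated game run.
    return -sum((w - 0.5) ** 2 for w in weights)

def evolve(population, generations=50, mutation_rate=0.1):
    """Selection, single-point crossover, and Gaussian mutation per generation."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]   # keep the fittest half
        children = []
        while len(survivors) + len(children) < len(population):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))            # single-point crossover
            child = a[:cut] + b[cut:]
            child = [w + random.gauss(0, mutation_rate) for w in child]  # mutate
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)
```

Each generation, the weakest heuristics are discarded and replaced by mutated crossovers of the survivors, mirroring how the snakes' heuristic functions improve run over run.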
For my Computer Modeling and Simulation class's final group project, I developed a simulation model to predict the generation and spread of forest fires. The goal of the project was to discern the impact of different leaf raking frequencies on the overall lifespan of a forest. The simulation, programmed in C++, follows a tick-based model, where each tick represents a single day. The simulation model uses a single clock to check and update the parameters affecting forest fires, leaf growth, and soil nutrient values.
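The tick-based structure can be illustrated with a small sketch (shown in Python for brevity, though the project itself was written in C++). The cell parameters and ignition probability below are illustrative assumptions, not the project's actual values:

```python
import random

def step(forest, rake_interval, day, p_ignite=0.001):
    """One tick (one day): leaves fall, litter may be raked, fires may ignite."""
    for cell in forest:
        cell["litter"] += cell["growth"]        # daily leaf fall adds fuel
        if day % rake_interval == 0:
            cell["litter"] = 0.0                # raking clears accumulated fuel
        # More litter means a higher chance a fire starts and kills the cell.
        if random.random() < p_ignite * cell["litter"]:
            cell["alive"] = False
    return forest

def simulate(days, rake_interval, n_cells=100):
    """Run the clock for `days` ticks and return how many cells survive."""
    random.seed(42)  # fixed seed for a reproducible illustration
    forest = [{"litter": 0.0, "growth": 1.0, "alive": True} for _ in range(n_cells)]
    for day in range(1, days + 1):
        step(forest, rake_interval, day)
    return sum(c["alive"] for c in forest)
```

Running the sketch with a weekly raking interval versus a quarterly one shows the qualitative effect the project measured: more frequent raking leaves less fuel on the ground, so more of the forest survives the year.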
As the final project for my Big Data class, I developed a word similarity search model using the MapReduce algorithm. The purpose of the project was to establish relationships between words in a collection of text documents. Apache Spark is used as the primary data processing engine; in particular, Spark's map and reduce operations are used to run MapReduce jobs on the input file and determine the words most similar to a given query word. Python is the programming language, with PySpark as the API between Python and Spark.
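The map and reduce phases can be sketched in plain Python (the actual project ran them through PySpark). In this sketch, two words are considered similar when they appear in similar contexts; the window size and cosine-similarity ranking are illustrative assumptions:

```python
import math
from collections import defaultdict

def map_phase(documents, window=2):
    """Map: emit a (word, context-word) pair for each co-occurrence in a window."""
    pairs = []
    for doc in documents:
        words = doc.lower().split()
        for i, w in enumerate(words):
            for c in words[max(0, i - window): i] + words[i + 1: i + 1 + window]:
                pairs.append((w, c))
    return pairs

def reduce_phase(pairs):
    """Reduce: aggregate pairs into a context-count profile per word."""
    profiles = defaultdict(lambda: defaultdict(int))
    for w, c in pairs:
        profiles[w][c] += 1
    return profiles

def most_similar(query, profiles):
    """Rank candidate words by cosine similarity of their context profiles."""
    q = profiles[query]
    def cosine(a, b):
        dot = sum(a[k] * b.get(k, 0) for k in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0
    return max((w for w in profiles if w != query), key=lambda w: cosine(q, profiles[w]))
```

Words that occur in near-identical contexts (e.g. "cat" and "dog" in "the cat sat on the mat" / "the dog sat on the mat") end up with near-identical profiles and rank as most similar. In Spark, the same phases map onto RDD transformations such as `map` and `reduceByKey`.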
Machine learning was definitely daunting to me at first, but developing a machine-learning-based image recognition program was both incredibly rewarding and exciting (and at times frustrating). The neural network learns to recognize patterns through forward and backward propagation over the training data.
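The forward/backward propagation cycle can be shown on a toy problem. This sketch trains a one-hidden-layer network on XOR; the layer sizes, learning rate, and loss are illustrative choices, not the architecture of the image recognition program itself:

```python
import numpy as np

# Toy network: 2 inputs -> 8 hidden units -> 1 output, trained on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward propagation: inputs -> hidden activations -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward propagation: gradients of mean squared error, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
```

Each iteration pushes predictions through the network (forward) and pushes the error gradient back through it (backward), nudging the weights until the XOR pattern is learned.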
Why write everything down in a worn notebook when you can write a computer program to keep track of all your upcoming tasks? I developed a program that does exactly that.
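The core idea fits in a few lines. This is a minimal sketch of a task tracker; the real program's features (persistence, due dates, and so on) are not shown:

```python
# Minimal task-tracker sketch: add tasks, mark them done, list what's pending.
class TaskList:
    def __init__(self):
        self.tasks = []

    def add(self, description):
        self.tasks.append({"description": description, "done": False})

    def complete(self, description):
        for t in self.tasks:
            if t["description"] == description:
                t["done"] = True

    def pending(self):
        return [t["description"] for t in self.tasks if not t["done"]]
```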
Everyone loves Sudoku. I love it so much that I went on to write a program that solves the puzzles for me.
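The classic approach is recursive backtracking: fill the first empty cell with a legal digit, recurse, and undo the choice if it leads to a dead end. A minimal sketch (not necessarily the exact implementation):

```python
def valid(board, r, c, v):
    """Check that digit v can legally go at row r, column c."""
    if v in board[r]:
        return False
    if any(board[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)   # top-left of the 3x3 box
    return all(board[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(board):
    """Backtracking solver; board is a 9x9 list of lists, 0 marks an empty cell."""
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for v in range(1, 10):
                    if valid(board, r, c, v):
                        board[r][c] = v
                        if solve(board):
                            return True
                        board[r][c] = 0   # undo and try the next digit
                return False              # no digit fits: backtrack
    return True                           # no empty cells: solved
```

Given an empty board, the same routine generates a complete valid grid, since generation is just solving with no clues.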
Do you ever sit down to write something and think to yourself, "man, I wish I knew what word to use next..."? I wrote a program that suggests next words using existing data from approximately 100 books.
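One simple way to do this, sketched below, is a bigram model: count which words follow which in the training text, then suggest the most frequent followers (the project's actual model may differ):

```python
from collections import Counter, defaultdict

def build_model(text):
    """Count bigrams: model[w] maps each word seen after w to its frequency."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for w, nxt in zip(words, words[1:]):
        model[w][nxt] += 1
    return model

def suggest(model, word, k=3):
    """Return the k words most frequently observed after `word`."""
    return [w for w, _ in model[word.lower()].most_common(k)]
```

Trained on a large corpus such as 100 books, the counts make the top suggestions reasonably natural continuations.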
Copyright © 2022 Nablul - All Rights Reserved.