A novel discrete-time general Markov chain SEIRS epidemic model with vaccination is derived and studied. The model incorporates finite delay times for disease incubation, natural and artificial immunity periods, and the period of infectiousness of infected individuals. The novel platform for representing the different states of the disease in the population uses two discrete time measures: the current time, and how long a person has been in their current state. Two sub-models are derived based on whether the drive to get vaccinated is inspired by close contacts with infectious individuals or otherwise. Sensitivity analysis is conducted on the two models to determine how vaccination affects disease eradication.
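The two-time-index representation described above can be sketched minimally as follows; the class, compartment labels, and method names here are illustrative assumptions, not the authors' formulation:

```python
from dataclasses import dataclass

# Hypothetical sketch of the two-clock state representation: each individual
# carries the current (global) discrete time t and the number of steps spent
# in their current epidemiological state. Compartment labels are assumed.
@dataclass
class IndividualState:
    compartment: str    # e.g. "S", "E", "I", "R", "V" (assumed labels)
    t: int              # current discrete time
    time_in_state: int  # how long the person has been in this compartment

    def step(self, new_compartment=None):
        """Advance one time step, resetting the in-state clock on transition."""
        if new_compartment is not None and new_compartment != self.compartment:
            self.compartment = new_compartment
            self.time_in_state = 0
        else:
            self.time_in_state += 1
        self.t += 1
        return self

# Usage: an exposed person whose incubation delay has elapsed becomes infectious.
person = IndividualState("E", t=10, time_in_state=3)
person.step("I")
assert (person.compartment, person.t, person.time_in_state) == ("I", 11, 0)
```

Tracking time-in-state explicitly is what lets the model impose finite delays (incubation, infectiousness, immunity periods) rather than the geometric sojourn times of a memoryless chain.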

According to the WHO, globally, as of March 5, 2021, Coronavirus Disease 2019 (COVID-19) has infected over 115 million people, caused over 2.5 million deaths and widespread economic downturn. In this talk, we present our modeling, numerical simulation, and analysis of the stochastic dynamics of COVID-19 in a closed population that is considered the starting point of the outbreak of the disease. We present two COVID-19 epidemic models for the population, where at any given time an individual can be in any of the following categories: susceptible, exposed and mildly infectious, asymptomatic and infectious, symptomatic and infectious, symptomatic hospitalized and infectious, recovered with partial immunity, or deceased from disease related causes. Both models are discrete-time Markov chain (DTMC) models with multinomial transition probabilities. Using CDC data on daily infection rate for the state of Georgia, we attempt to fit the model to data using multiple linear regression and conduct sensitivity analysis on a selected stochastic model to determine the effects of varying disease parameters on the dynamics of the epidemic. The results from this work highlight the importance of applying statistical and stochastic methods to understand and control COVID-19 dynamics in a population.
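One step of a DTMC with multinomial transition probabilities can be sketched as below; the compartment labels and probability values are placeholders for illustration, not the fitted parameters from the talk:

```python
import numpy as np

# Minimal sketch (not the authors' calibrated model) of one DTMC step with
# multinomial transition probabilities: all individuals currently in a
# compartment are split among destination compartments by a multinomial draw.
rng = np.random.default_rng(0)

counts = {"S": 990, "E": 10, "A": 0, "I": 0, "H": 0, "R": 0, "D": 0}

# Illustrative per-step transition probabilities out of "E" (must sum to 1);
# the values here are placeholders, not fitted parameters.
p_from_E = {"E": 0.80, "A": 0.10, "I": 0.10}

def multinomial_step(n, probs, rng):
    """Split n individuals among destination compartments according to probs."""
    dests = list(probs)
    draws = rng.multinomial(n, [probs[d] for d in dests])
    return dict(zip(dests, draws))

moved = multinomial_step(counts["E"], p_from_E, rng)
assert sum(moved.values()) == counts["E"]  # the draw conserves individuals
```

Because each draw conserves the number of individuals leaving a compartment, total population is preserved at every step (up to disease-related deaths tracked in "D").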

In a Markov Decision Process, an agent must learn to choose actions in order to optimally navigate a Markovian environment. When the system dynamics are unknown and the agent's behavior is learned from data, the problem is known as Reinforcement Learning. In theory, for the learned behavior to converge to the optimal behavior, data must be collected from every state-action combination infinitely often. Therefore in practice, the methodology the agent uses to explore the environment is critical to learning approximately optimal behavior from a reasonable amount of data. This paper discusses the benefits of augmenting existing exploration strategies by choosing actions in a low-discrepancy manner. When the state and action spaces are discrete, actions are selected uniformly from those that have been tried the least number of times. When the state and action spaces are continuous, quasi-random sequences are used to select actions. The superiority of this strategy over purely random action selection is demonstrated by proof for a simple discrete MDP, and empirically for more complex processes.
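The discrete-case rule, selecting uniformly among the least-tried actions, can be sketched as follows; the `visit_counts` bookkeeping structure and function name are assumptions for illustration:

```python
import random

# Hedged sketch of the discrete-case strategy: among the actions tried least
# often in the current state, pick one uniformly at random.
# `visit_counts[state][action]` is an assumed bookkeeping structure.
def least_tried_action(visit_counts, state, actions, rng=random):
    counts = [visit_counts[state][a] for a in actions]
    m = min(counts)
    candidates = [a for a, c in zip(actions, counts) if c == m]
    return rng.choice(candidates)

# Usage: with counts {0: 2, 1: 0, 2: 0}, only actions 1 and 2 are candidates.
visit_counts = {"s0": {0: 2, 1: 0, 2: 0}}
a = least_tried_action(visit_counts, "s0", [0, 1, 2])
assert a in (1, 2)
```

Restricting the uniform draw to the least-tried actions keeps per-state visit counts nearly balanced, which is the low-discrepancy property the abstract refers to.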

In many statistical and machine learning applications, without-replacement sampling is considered superior to with-replacement sampling. In some cases, this has been proven, and in others the heuristic is so intuitively attractive that it is taken for granted. In reinforcement learning, many count-based exploration strategies are justified by reliance on the aforementioned heuristic. This paper will detail the non-intuitive discovery that when measuring the goodness of an exploration strategy by the stochastic shortest path to a goal state, there is a class of processes for which an action selection strategy based on without-replacement sampling of actions can be worse than with-replacement sampling. Specifically, the expected time until a specified goal state is first reached can be provably larger under without-replacement sampling. Numerical experiments describe the frequency and severity of this inferiority.
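The two action-selection schemes being compared can be sketched as below; the generator names and the "deck" formulation of without-replacement sampling are illustrative assumptions:

```python
import random

# Sketch contrasting the two schemes. With-replacement: each step samples
# uniformly from all actions. Without-replacement: actions come from a
# shuffled "deck" that is refilled only after every action has been tried.
def with_replacement_sampler(actions, rng):
    while True:
        yield rng.choice(actions)

def without_replacement_sampler(actions, rng):
    while True:
        deck = list(actions)
        rng.shuffle(deck)
        yield from deck  # each action appears exactly once per pass

rng = random.Random(0)
sampler = without_replacement_sampler([0, 1, 2], rng)
first_pass = [next(sampler) for _ in range(3)]
assert sorted(first_pass) == [0, 1, 2]  # every action tried once per pass
```

The abstract's point is that this balanced coverage, usually an advantage, can provably increase the expected first-hitting time of a goal state on some processes, since an action that would reach the goal may be forced to wait out the rest of a pass.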

The conference was originally scheduled for May 2020 but was rescheduled to May 2021 due to the COVID-19 pandemic.
