AUTHORS: OSCAR CHANG AND LUIS ZHININ-VERA
May 10th, 2020.
This project presents a biologically inspired robot capable of learning by itself high-level Tic-Tac-Toe playing policies and then using this knowledge to compete advantageously with humans. The robot comprises a robotic arm, an artificial vision system and a self-motivated neural agent with the capability to explore, in a simulated environment, new forms of game episodes that lead toward bigger rewards. During the training phase we propose a three-term reinforcement learning scheme in which the agent's memory resources are sustained by adviser neural sub-networks, trained with balanced noise so as to satisfy the look-into-the-future condition of the control optimization predicted by the Bellman equation. In the operating phase the components merge into a wised-up robot with look-ahead capacities that mimics the abilities of ingenious human players. The achieved look-ahead robotic intelligence could be useful in other complex robotic mechanisms.
We describe a method to design a robot driven by a self-taught reinforcement learning neural agent that develops the capacity to play Tic-Tac-Toe with far look-ahead capacity. The robot comprises an artificial vision platform that feeds sparse data to a self-taught neural agent, which explores new game situations in a virtual world. Through reinforcement learning, neural models and gradient descent algorithms, the agent learns high-level game strategies by itself. The acquired knowledge is later used to play clever matches in a physical environment, where the robot watches the game board and uses its robotic arm to select game moves. The obtained look-ahead intelligence could be useful in other robotic processes where critical visual decisions have to be taken, such as autonomous surgery, vehicle-driving robots and information security.
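The learning scheme described above can be illustrated with a minimal tabular Q-learning sketch for Tic-Tac-Toe. This is not the authors' implementation (which uses neural networks and adviser sub-networks); the state encoding, constants and function names are assumptions chosen for brevity.

```python
import random

# Tabular Q-learning sketch for Tic-Tac-Toe (illustrative, not the authors' code).
# States are board tuples of 'X', 'O', ' '; actions are empty-cell indices.
Q = {}  # (state, action) -> estimated discounted future reward

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # assumed learning constants

def legal_moves(board):
    return [i for i, c in enumerate(board) if c == ' ']

def choose_move(board):
    # Epsilon-greedy: mostly exploit learned values, sometimes explore.
    moves = legal_moves(board)
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda a: Q.get((board, a), 0.0))

def update(board, action, reward, next_board):
    # One-step backup of the Bellman equation:
    #   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    # The max over next actions is the "look into the future" term.
    best_next = max((Q.get((next_board, a), 0.0)
                     for a in legal_moves(next_board)), default=0.0)
    old = Q.get((board, action), 0.0)
    Q[(board, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

After many simulated self-play episodes with a reward at winning terminal boards, the table (or, in the authors' case, a neural approximation of it) encodes far look-ahead move values.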
The general robot layout.
Self-motivated neural agent.
The learning networks.
Adviser Neural Agent.
REFERENCES:
Anjomshoae, S., Najjar, A., Calvaresi, D., Främling, K.: Explainable agents and robots: Results from a systematic literature review (06 2019).
Brembs, B.: Genetic analysis of behavior in Drosophila. In: The Oxford Handbook of Invertebrate Neurobiology, p. 171. Oxford University Press (2019).
Bösser, T.: Autonomous agents. In: Wright, J.D. (ed.) International Encyclopedia of the Social & Behavioral Sciences (Second Edition), pp. 309–313. Elsevier, Oxford (2015). https://doi.org/10.1016/B978-0-08-097086-8.43011-4
Canaan, R., Salge, C., Togelius, J., Nealen, A.: Leveling the playing field: fairness in AI versus human game benchmarks (03 2019).
Chang, O.: Autonomous robots and behavior initiators. In: Anbarjafari, G., Escalera, S. (eds.) Human-Robot Interaction, chap. 7. IntechOpen, Rijeka (2018). https://doi.org/10.5772/intechopen.71958
Chang, O.: Self-programming robots boosted by neural agents. In: Wang, S., Yamamoto, V., Su, J., Yang, Y., Jones, E., Iasemidis, L., Mitchell, T. (eds.) Brain Informatics, pp. 448–457. Springer International Publishing, Cham (2018).
Crowley, K., Siegler, R.S.: Flexible strategy use in young children's tic-tac-toe. Cognitive Science 17, 531–561 (1993).
Datta, S., Barua, R., Das, J.: Application of artificial intelligence in modern healthcare system. In: Pereira, L. (ed.) Alginates, chap. 8. IntechOpen, Rijeka (2020). https://doi.org/10.5772/intechopen.90454
Do, N.: How to win at Tic-Tac-Toe (2005).
Fig. 1. The protein folding robotic agent. With 30 links and 15 different amino acids, motivated by received rewards and by its capacity to look into the future, the agent rapidly learns by itself to bring black dots together and separate white ones in a quasi-optimal array.
AUTHORS: OSCAR CHANG, LUIS PLAZA, ANTONIO DIAZ, FERNANDO GONZALES, ERICK CUENCA AND LUIS ZHININ-VERA
May 26th, 2020.
Virus infection is of great concern because a massive spread of contagion can cause great trouble to the well-functioning of society, as painfully proven by the brutal blow of Covid-19. It is known that when the influenza virus infects cells, it recognizes cell-surface receptors so as to infect the "right" cell types and begin the fusion of small vesicles. It is not exactly known how the key protein hemagglutinin esterase (HEs) interacts with cell membranes to refold and alter them, promoting viral entry. HEs is thus of vital importance in SARS-CoV-2 Covid-19 control because it is the turn-on key that the coronavirus family uses to penetrate cells and cause pandemics.
In previous research we proposed and developed computer simulation programs in C++, based on Artificial Intelligence, for a self-motivated, multi-joint virtual robot driven by an agent that learns by itself to control the robot's body and execute complex folding motions that produce intelligent mass displacements in multifactorial environments. The robot's mechanical joints, muscles and sensors are controlled by trainable artificial neurons, and its abilities to execute challenging mechanical work are acquired by a self-taught agent: a computer program capable of exploring and learning new solutions by itself through a combination of artificial neural nets, gradient descent and reinforcement learning algorithms.
Reinforcement learning is essentially the numerical solution by computer of the Bellman equation, an applied-mathematics concept that guarantees maximal obtained value in the control of a sequence of events that occur in a complex environment, with rewards scattered in space-time. In this sense a Bellman agent must always look into the future during its learning journey. By extending these techniques, we have also constructed a wised-up visual robot driven by a self-taught agent that learns by itself to play high-level Tic-Tac-Toe and outsmart human contenders in physical board games.
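The Bellman optimality equation mentioned above, V(s) = max_a [R(s,a) + γ·V(s')], can be solved numerically by value iteration. The toy chain of states below is purely illustrative (not the robot's state space): only the final transition pays a reward, yet the discounted look-ahead term propagates that future value backward to every earlier state.

```python
# Value iteration sketch of the Bellman optimality equation on a toy MDP:
#   V(s) = max_a [ R(s, a) + GAMMA * V(next(s, a)) ]
# Deterministic 4-state chain; state 3 is terminal and entering it pays 1.

GAMMA = 0.9
STATES = [0, 1, 2, 3]
ACTIONS = ['stay', 'forward']

def step(s, a):
    """Deterministic transition and reward: reward 1.0 only for reaching the goal."""
    s2 = min(s + 1, 3) if a == 'forward' else s
    r = 1.0 if (s2 == 3 and s != 3) else 0.0
    return s2, r

V = {s: 0.0 for s in STATES}
for _ in range(50):            # sweep until the values converge
    for s in STATES[:-1]:      # the terminal state keeps V = 0
        V[s] = max(r + GAMMA * V[s2]
                   for s2, r in (step(s, a) for a in ACTIONS))
```

After convergence V = {0: 0.81, 1: 0.9, 2: 1.0, 3: 0.0}: the reward scattered at the end of the sequence is visible, discounted, from every state, which is exactly the "look into the future" behavior a Bellman agent exploits.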
In response to the Covid-19 pandemic we recently adapted our robotic agents to perform basic protein folding tasks, obtaining promising results. The conversion was readily achieved by turning each link of the robot into an amino acid and the local link sensors into molecular dynamics sensors, perceiving nearby fields, ligand bindings, polarities, etc., all the parameters required for proper protein folding. As a simple example consider the events in Fig. 1. Here the robotic agent, with 30 links and 15 different amino acids with different ligand-binding characteristics, receives rewards if its bending states bring black dots together and separate white ones. In the sequence, thanks to its look-into-the-future capacity, the agent rapidly learns to find good folding solutions among the quasi-infinite set of bending states.
To study and simulate protein folding structures such as HEs and learn to counteract important aspects of virus infection by using the creative force of protein folding robotic agents; for instance, how HEs maliciously interacts with cell membranes and promotes cell contagion. In principle our robotic agents, after learning to imitate the HEs assembly, could construct by themselves new intelligent proteins that target specific parts of the coronaviruses and act against infection. To extend the software to the supercomputer "Quinde" operating on the Yachay Tech campus and to the CUDA-GFORCE parallel computing environment.
Develop new Artificial Intelligence-based tools to study protein folding and apply this knowledge to the control of virus infections in Ecuador and the world.
Measure how quickly a protein can mutate and refold to cause new infections.
Discover new "intelligent" proteins for virus control.
Serve as a resource for future research into virus control.
Develop a multidisciplinary research team in protein folding simulation.
The proposed protein folding robotic agents open a new concept in protein folding modeling, where agents play a main role and propel proactive methods that learn to counteract harmful aspects of virus functioning. A robotic agent could in principle find by itself new intelligent proteins not found in nature, targeted at specific parts of the virus. It may eventually interact with other advanced protein folding platforms such as AlphaFold and Folding@home.
The expected tangible results of this project are:
A software tool for the intelligent synthesis of anti-virus proteins.
Two scientific papers.
Two conference presentations at universities and/or research centers.
Current problem description.
To identify the domain characteristics that indicate the appropriateness of an agent-based solution.
Study and detailed analysis of how a protein chain acquires its native 3-dimensional structure.
Study of amino acid chain formation, molecular reactions, organization and protein folding.
Conversion of amino acid chains into robotic structures.
Control of the robotic structure by neural networks.
Consolidate agents, environment, exploration, exploitation and rewards.
Design and construction of the software tool.
Testing, debugging and evaluation.
Paper development and writing.
Conferences, research activities.
MULTIDISCIPLINARY RESEARCH TEAM COMPOSED OF COMPUTER AND AI SPECIALISTS, A MOLECULAR CHEMIST, A MOLECULAR BIOLOGIST, A MATERIALS SCIENTIST, A DATA ANALYST AND A MEDICAL DOCTOR:
| Name | Expertise | Affiliation |
|---|---|---|
| Oscar Chang, PhD | Artificial Intelligence, Robotics, Agents, Computer Modeling | Yachay Tech University |
| Luis Plaza, MD, PhD | Anatomic pathologist, immunopathologist, Doctor of Medicine and Surgery | Universidad de Guayaquil |
| Antonio Diaz, PhD | Macromolecular chemistry, Polymers | Yachay Tech University |
| Fernando Gonzales, PhD | Molecular Biology; production, purification and characterization of proteins | Yachay Tech University |
| Gema Gonzalez, PhD | Materials Science, Nanomaterials, Biomaterials | Yachay Tech University |
| Erick Cuenca, PhD | Visual data analysis | Yachay Tech University |
| Luis Zhinin, B.Sc. | Artificial Intelligence, Computer Programming, Agents | MIND Research Group |
REFERENCES:
Pande, V.: Protein folding and viral infection (2012, February 24). https://foldingathome.org/2012/02/24/update-from-the-kasson-lab-at-the-university-of-virginia/
Kasson, P.M., Ensign, D.L., Pande, V.S.: Combining molecular dynamics with Bayesian analysis to predict and evaluate ligand-binding mutations in influenza hemagglutinin (2009, August 19). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2737089/
Chang, O.: Self-programming robots boosted by neural agents (2019).
Chang, O.: Autonomous robots and behavior initiators (2018). https://doi.org/10.5772/intechopen.71958
Sutton, R., Barto, A.: Reinforcement Learning: An Introduction. Adaptive Computation and Machine Learning series, MIT Press (2018). https://books.google.com.ec/books?id=6DKPtQEACAAJ
Chang, O., Zhinin-Vera, L.: A Wise Up Visual Robot Driven by a Self-taught Neural Agent (2020). In press.
AlphaFold: Using AI for scientific discovery (n.d.). https://deepmind.com/blog/article/AlphaFold-Using-AI-for-scientific-discovery
AUTHOR: LUIS ZHININ-VERA
March 4th, 2020.
Every year, billions of dollars are lost due to credit card fraud, causing huge losses for users and the financial industry. This kind of illicit activity is perhaps the most common and the one that causes the most concern in the finance world. In recent years great attention has been paid to the search for techniques to avoid this significant loss of money. In this degree project, we address credit card fraud by using an imbalanced dataset that contains transactions made by credit card users. Our Q-Credit Card Fraud Detector system classifies transactions into two classes, genuine and fraudulent, and is built with artificial intelligence techniques comprising Deep Learning, an Autoencoder, and Neural Agents, elements that acquire their predicting abilities through a Q-learning algorithm. Our computer simulation experiments show that the assembled model can produce quick responses with remarkable accuracy (98.1) and high performance in fraud classification, which is necessary for this model to be reliable and have relevance in future research.
This paper presents a credit card fraud detector based on deep networks and reinforcement learning. It uses an autoencoder and a neural agent trained with a Q-learning algorithm working on an imbalanced data set, treated by PCA and containing real credit card transactions. The system performs credit card transaction classification by combining supervised, unsupervised and reinforcement learning. The proposed solution works as a quick-acting intelligent agent and can identify frauds with remarkable accuracy.

The future scope is to adjust our solution to work in real-time banking systems. For this, a more complex database should be obtained (e.g., credit card holder id, where the transaction was realized, salesman id) and the solution improved where necessary.
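The autoencoder component rests on a reconstruction-error idea: fit the model to genuine transactions only, then flag transactions it reconstructs poorly. The sketch below uses a linear (PCA-style) autoencoder as a stand-in for the paper's deep autoencoder and Q-learning agent; the synthetic data, bottleneck size and threshold are assumptions for illustration only.

```python
import numpy as np

# Linear stand-in for the deep autoencoder described above (illustrative):
# fit a low-rank reconstruction on genuine transactions only, then flag
# transactions whose reconstruction error exceeds a threshold as fraudulent.

rng = np.random.default_rng(0)

# Synthetic PCA-treated features: genuine traffic lies near a 2-D subspace
# of a 10-D feature space; "fraud" points fall off that subspace.
basis = rng.normal(size=(2, 10))
genuine = rng.normal(size=(500, 2)) @ basis + 0.01 * rng.normal(size=(500, 10))
fraud = rng.normal(size=(5, 10))

# "Training": the least-squares linear encoder/decoder pair is given by the
# top principal directions of the genuine data.
mean = genuine.mean(axis=0)
_, _, vt = np.linalg.svd(genuine - mean, full_matrices=False)
components = vt[:2]                        # bottleneck of size 2

def reconstruction_error(x):
    code = (x - mean) @ components.T       # encode
    recon = code @ components + mean       # decode
    return np.linalg.norm(x - recon, axis=-1)

# Classify: anything reconstructed worse than 99% of genuine traffic is flagged.
threshold = np.percentile(reconstruction_error(genuine), 99)
is_fraud = reconstruction_error(fraud) > threshold
```

In the thesis the decoder is deep and the final decision is refined by a Q-learning neural agent; the sketch only shows why training on the genuine class alone lets the imbalance of the data set work in the detector's favor.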