---
layout: default
title: Home
---
I like developing useful things and looking into fascinating new topics in Computer Science.
I completed my Bachelor's degree in Information Systems at the Technical University of Munich, complemented by an exchange semester at the Hong Kong University of Science and Technology. Throughout my studies I worked at the Institute of Computational Biology in Munich, at Deutsche Bank, and at PwC. I also worked as a teaching assistant at my university for courses on Computer Science in the context of business and on Applied Software Engineering.
I wrote my project about a research topic that has interested me since before I started university. If you are somewhat familiar with Artificial Intelligence and the structures it operates on, you might have asked yourself to what degree it reflects the way your brain operates. I have been wondering the same ever since I learned about neural plasticity in high school. One of the most popular and impressive applications of Artificial Intelligence is undoubtedly ChatGPT. The GPT architecture has been around for some time and is a classic example of a highly capable Artificial Neural Network. To replicate this architecture in a way that more closely resembles the brain, I looked into the structure of Spiking Neural Networks. "Spiking" refers to the way biological neurons pass on the charge they have integrated from previous neurons: they release transmitters into the synaptic cleft, exciting adjacent neurons. This can be realized very energy-efficiently on neuromorphic hardware; however, due to the all-or-nothing behavior of the neurons, a single spike cannot represent a value range larger than one. Instead, information can be propagated through the rate of spikes or their exact timing. This field is still rather new, so even a weaker overall performance is a significant finding. If you are interested in looking further into my implementation, have a look at my repository.
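As a rough illustration of the idea, here is a minimal sketch of a leaky integrate-and-fire neuron using rate coding. The constants and function names are illustrative and not taken from my actual implementation:

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron with rate coding.
# All names and constants here are illustrative placeholders.
import numpy as np

def lif_rate_code(input_current, n_steps=100, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron and return its firing rate (spikes per step)."""
    v = 0.0
    spikes = 0
    for _ in range(n_steps):
        # Leaky integration: the membrane potential decays toward rest
        # while accumulating the incoming charge.
        v += dt * (-v / tau + input_current)
        if v >= v_thresh:      # all-or-nothing: either a full spike fires or nothing happens
            spikes += 1
            v = v_reset        # reset after the spike
    return spikes / n_steps    # the graded information lives in the spike *rate*

# A stronger input produces a higher firing rate, which is how a binary
# spike train can still carry a graded value.
for current in (0.06, 0.1, 0.2):
    print(current, lif_rate_code(current))
```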
At the Technical University of Munich, I participated in the applied course iPraktikum, where student teams work together with industry partners to tackle real-life problems. I joined the team of industry partner Siemens, where we created an orchestration system for industrial robots. This was especially challenging because the program had to serve a user base of both autonomous and non-autonomous agents.
If you are interested in diving deeper into this project, check out our design review and client acceptance test on the site of the applied software engineering group.
Through my university exchange program, I spent half a year in Asia, at The Hong Kong University of Science and Technology (HKUST). My courses there focused mostly on Machine Learning, Data Mining, Artificial Intelligence, and Financial Technology. I enjoyed all of the courses very much and can wholeheartedly recommend this university if you are thinking about doing an exchange. If you would like to read more about my exchange experience, have a look at my experience report.

On the academic side, my exchange was accompanied by a number of interesting topics and projects. For my subjects in Financial Technology, I did a case study on the growth of Spotify as an example of the rise of e-commerce, and I wrote an NFT contract in Remix which, coupled with a third-party website and sophisticated data validation, would constitute an identity for Web3. For my course on Data Mining, I developed an ensemble model made up of a decision tree, a naive Bayes model, and a neural network, which we applied after cleaning and upsampling a dataset on insurance fraud; the resulting model is estimated to decrease operating cost by about 40%. The course on Fundamentals of AI featured a final project where I used unsupervised learning to optimize the navigation behavior of two autonomous agents in a large maze, which had to be prevented from crashing. Since my last course, Machine Learning, was at postgraduate level, the final project was a bit more theoretical in nature. Together with three other exchange students, I wrote a conference-style paper comparing different deep learning image-to-image techniques, which we supplemented with a corresponding implementation in PyTorch. Have a look at our presentation if you would like to delve deeper into this project.
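To give an idea of what such a voting ensemble can look like, here is a minimal sketch using scikit-learn. The dataset below is a synthetic placeholder; the actual cleaning, upsampling, and tuning from the course project are not shown:

```python
# Minimal sketch of a soft-voting ensemble of a decision tree, a naive Bayes
# model, and a small neural network, assuming scikit-learn. Placeholder data.
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Placeholder standing in for a cleaned and upsampled insurance-fraud dataset.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5)),
        ("nb", GaussianNB()),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)),
    ],
    voting="soft",  # average the predicted probabilities of the three models
)
ensemble.fit(X_train, y_train)
print("held-out accuracy:", ensemble.score(X_test, y_test))
```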
During my time at PwC I worked on a very interesting topic that is gaining more and more traction as the new capabilities of Artificial Intelligence slowly gain recognition among the general public. The European Union in particular has recently started looking into regulations for the responsible usage and design of such models. Correspondingly, PwC aims to take this into account in its technology consulting, which is why they started to develop RAIVE, the Responsible AI Validation Engine, which I spent the majority of my internship developing. More information on the general motivation can be found on the company website. To substantiate the idea with a solid academic foundation, I reviewed a number of papers on Explainable AI (XAI) and worked more closely with quantitative libraries, but also with qualitative indicators such as user surveys.
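As an illustration of the quantitative side of such a validation, here is a minimal sketch of feature attribution with the SHAP library on a placeholder model. It only shows the kind of analysis XAI libraries enable and is not the actual tooling used at PwC:

```python
# Minimal sketch of explaining a model's predictions with SHAP values,
# assuming the shap library and a tree-based model. Placeholder model/data.
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features,
# giving a per-feature view of how the model reaches its decisions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)
```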