The Rise of Self-Learning Space AI!?
Sobem Eresia-Eke
Grade 11
Presentation
No video provided
Problem
Hi, my name is Sobem Eresia-Eke, and after reading my project's title you're probably wondering why we would need AI in space, and specifically in rover navigation. To answer that question, let's look back to the year 1999: a year marked by the end of a century, the first Matrix movie, and, most importantly for our purposes, the destruction of the Mars Climate Orbiter. If you were to search for the top ten most famous Mars mission failures, you'd probably stumble across this disaster repeatedly, and the reason it stands out amongst so many others is how simply avoidable the entire ordeal was. As with most NASA missions, different groups of engineers worked on individual parts of the spacecraft before collaborating and putting everything together. The fatal error in this case was that one team had worked in imperial units while the rest used metric units, which resulted in the aforementioned catastrophe of a mission.
Unfortunately, it doesn't end there. Take the demise of the Opportunity rover at the hands of a global Martian dust storm, for example. Opportunity is rightly considered a great feat of space exploration, having been expected to last 90 Martian days but managing over 5,000 before contact was lost in the 2018 storm (with the mission formally declared over in early 2019). Even so, its untimely demise during one of the most intense Martian dust storms ever observed sheds light on the inaccuracy and unreliability of Martian weather predictions (akin to Earth weather forecasts, but of course much harder), as well as the extreme difficulty of predicting Martian dust storms.
While it's important to note that the fine folks at NASA and other space agencies have drastically improved the reliability and performance of rovers since 1999, it's also important to keep in mind that the issues of human error and climate disasters remain very much real in our modern-day world... or at least they were. This is where artificial intelligence and my project step in. In this project I go over in detail the mechanics of how AI works, the different methods it's trained with, real-world examples of AI breakthroughs, and, most importantly, how AI can be applied in space exploration through Martian rovers.
Just imagine an AI-powered Martian rover that's able to analyze and find previously unknown connections in Martian weather patterns to make more accurate predictions, that's able to drastically reduce the possibility of catastrophic failure caused by human error, and that's able to adapt to unforeseen and unpredictable scenarios far faster than a human controller 3 to 22 light-minutes away ever could. Let's jump in.
Method
For my research project, I first researched the different types of machine learning and their various strengths and weaknesses. From there, I covered revolutionary real-life examples of machine learning models making breakthroughs in a variety of fields, from weather forecasting, to playing Tic Tac Toe, to determining the complex 3D structure of a folded protein. Finally, I tied everything back to the central idea of AI in space (specifically Martian rover navigation) by using these AI case studies to extrapolate what AI could accomplish in Martian rover navigation and in furthering space exploration and scientific discovery.
Research
The 4 Types of Machine Learning
Before we get started, we need to cover the four major types of machine learning (ML) methods: reinforcement, deep, supervised, and unsupervised learning. But before we get to those, we first need to understand what machine learning is in the first place. Machine learning is essentially an umbrella term for all the different techniques an AI can be trained with, and as we're about to learn, each of the four major ML methods has different strengths, weaknesses, and characteristics.
First we have supervised learning, where the algorithm is trained using carefully sorted "labeled data". A good metaphor is a student studying for a test by answering practice questions and then checking their answers against an answer book. This is essentially how a supervised algorithm is trained: it learns to sort through data by training with large datasets that describe what the AI should do in different scenarios. This makes it very good at making predictions on data similar to what it was trained with, but the downside is that if (or, more likely, when) the real world doesn't match the specific training ground these algorithms were trained in, their performance begins to suffer. It would be the equivalent of learning and practicing how to drive only in perfect, ideal weather conditions and then having your driving test during heavy snow and hail; odds are you won't be all that prepared. In addition, a significant amount of human intervention goes into the training of these algorithms, which drastically slows the training process down.
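To make this concrete, here's a minimal sketch of supervised learning in Python using scikit-learn. The rover-themed "visibility and wind" features and their labels are invented purely for illustration, not taken from any real dataset.

```python
# A minimal supervised-learning sketch (illustrative data, not a real dataset).
from sklearn.tree import DecisionTreeClassifier

# Each labeled example: [visibility_km, wind_speed_kmh] -> "drive" or "wait"
X_train = [[10.0, 5.0], [8.0, 10.0], [0.5, 60.0], [1.0, 45.0]]
y_train = ["drive", "drive", "wait", "wait"]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)          # learn from the "answer book"

# The model predicts well on conditions similar to its training data...
print(model.predict([[9.0, 8.0]]))   # -> ['drive']
# ...but conditions far outside the training data (our "snow and hail"
# driving test) are where supervised models start to struggle.
print(model.predict([[3.0, 80.0]]))
```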
Now you're probably wondering: what if, instead of carefully spoonfeeding the AI expensive and time-intensive labeled data, we handed it completely raw data and had the algorithm do all the work? That's exactly where unsupervised learning comes in. In unsupervised learning, the AI is given fully raw data without being told what it's for, or even given detailed instructions on what to do with it; from this, the AI analyzes the data to find any patterns or connections it can. This makes these algorithms very good at finding hidden patterns in datasets that might've been overlooked by human analysts, but the downside is that they're also really good at finding connections in data that just don't exist. Take the below image of a cat, for instance. To you or me this would just seem like a cat in an alleyway and nothing more, but to an unsupervised learning algorithm trained on a limited dataset, the mere fact that the cat is pictured in an alleyway could lead the model to the flawed conclusion that all animals pictured in alleyways are cats (or are most likely to be cats).
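For contrast, here's a minimal unsupervised sketch: k-means clustering is handed raw, unlabeled points (again, made-up data) and left to find its own groupings.

```python
# A minimal unsupervised-learning sketch: k-means clustering on unlabeled points.
# The data is invented for illustration; no labels are ever provided.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(20, 2))
group_b = rng.normal(loc=[5, 5], scale=0.5, size=(20, 2))
X = np.vstack([group_a, group_b])

kmeans = KMeans(n_clusters=2, n_init=10).fit(X)
print(kmeans.labels_)    # the algorithm's own grouping of the raw data
# Note: with too little (or unrepresentative) data, the clusters it finds
# can be spurious -- the "all alley animals are cats" failure mode.
```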
So, to review our first two ML types: we have supervised learning, which is good at making accurate predictions but only with data that closely matches what it was trained on, and which requires lots of human intervention; and we have unsupervised learning, which requires minimal human intervention but is susceptible to finding connections and patterns where there are none. Now you're probably wondering: what if there were a way to combine the ability to find connections in data with the ability to make predictions from that data, without the tradeoff between accuracy and convenience? Well then, let's hop into our next ML method, deep learning, which is centered entirely around learning from data using an artificial neural network (or ANN for short).
ANNs are a form of artificial intelligence heavily inspired by the inner workings and structure of the human brain, which works thanks to a complex network of billions of neurons, all connected to each other and transmitting electric pulses (or "information") back and forth. It's because of this biological neural network that we're able to problem-solve, think critically, and accomplish one of AI's most challenging tasks: identifying an animal in a photo. Take the above image, for instance; you and I would identify it as a cat in probably less than a second. But for even this obviously simple task, three different sections of our biological neural network had to be at play.
First is the input layer; these are the neurons closest to your eyes that directly receive the raw visual information your eyes gathered. Next is the hidden layer, which is everything that works behind the scenes; the neurons in this layer are by far the most numerous and arguably play the most important role in identifying this animal as a cat. An easy way to think about this layer is as a series of if/then/else branches, similar to how a piece of code may run.
It starts with a question like "does this animal have four legs?"; if it does, it goes down one branch and asks follow-up questions to narrow down which animal it is (does it have a tail, does it have fur, is its fur black, etc.). Then we have the output layer: these are the neurons at the end of the neural network that provide some kind of outcome, like the moment you realize the image is of a cat. Another essential component of an artificial neural network, one that helps it improve and not make the same mistakes twice, is the concept of weights. Take a newly created ANN that is shown the same image of a cat: during the first few trials it would probably be very inconsistent and make its fair share of errors. However, as the neural network begins to advance, it starts to learn which neurons (or nodes) are more reliable in specific scenarios. For instance, the AI may determine that the node that says the above image is of a common house cat is more likely to be accurate than the node that says it's a small furry alien from Alpha Centauri (both somewhat reasonable guesses, but the neural network begins to learn that creatures in photos are less likely to be aliens and more likely to be earthly creatures).
This means that the node that says the animal is a cat will be assigned a higher weight than the second node, which essentially means its opinion is held as more valuable; this ensures that the neural network is less likely to identify the creature as an alien and more likely to categorize it as a cat. Now that we have our artificial neural network, this is where deep learning comes into play. Because these algorithms are often trained with labelled data, they become very good at learning complex relationships within it, but for deep learning algorithms, their greatest weakness is probably not the one you would've expected.
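To make the layers and weights less abstract, here's a toy forward pass through an input, hidden, and output layer. The weights, the four "pixel features", and the cat-versus-alien output nodes are all arbitrary placeholders, not a trained network.

```python
# A toy artificial neural network forward pass (arbitrary weights, for illustration).
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Input layer: 4 made-up pixel features from our cat photo.
inputs = np.array([0.8, 0.1, 0.4, 0.9])

# Hidden layer: 3 neurons, each with one weight per input plus a bias.
W_hidden = np.array([[0.2, -0.5, 0.9, 0.1],
                     [0.7, 0.3, -0.2, 0.4],
                     [-0.1, 0.8, 0.5, -0.6]])
b_hidden = np.array([0.1, 0.0, -0.2])
hidden = relu(W_hidden @ inputs + b_hidden)

# Output layer: 2 neurons, scores for "cat" vs "alien from Alpha Centauri".
W_out = np.array([[1.2, 0.4, 0.3],
                  [-0.8, 0.1, 0.2]])
scores = W_out @ hidden
print(dict(zip(["cat", "alien"], scores)))
# Training would adjust these weights so the "cat" node earns more trust
# (a higher weight) than the "alien" node for images like this one.
```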
So let's imagine the above toddler learning what a banana is for the first time. They'll probably struggle at first, but if you were to fast forward 10 years, he'd probably be a master at identifying all sorts of fruits and veggies, right? Well, here comes the part you probably didn't expect: if you were to ask our now-teenage boy how exactly he learnt to associate this yellow fruit with a banana, do you think he'd be able to tell you? Odds are, probably not. Now imagine our teenager had mistakenly learnt as a toddler that all yellow fruits are bananas and no one corrected him. Using this simple analogy, I've just explained the basics of the black box problem and exactly why it is a problem. The black box problem is essentially when we can't understand why the specific inputs a deep learning algorithm was given led it to the output it produced. This is a problem because it makes it much harder to understand (and therefore fix) the process behind why our algorithm may have made an incorrect output.
To relate this back to our teenager analogy, it would be like if our teenager thought that all yellow fruits are bananas but couldn't remember or tell you why he thinks so, which would obviously make it much harder to teach him why not all yellow fruits are bananas. If you've been keeping track, you've probably started asking yourself: "If we allow an AI to train itself using deep learning, it would be really good at making connections in the data, but if it found connections that weren't actually there, it would be almost impossible to fix or to know why it did so. What if we got a human to tell the AI whether it's on track, while still getting the AI to do most of the work?" Well, my observant listener, that's exactly where reinforcement learning comes in. In reinforcement learning, the AI improves at the desired task thanks to human feedback. A useful analogy is to imagine you're trying to train your new pet dog to sit: unlike how a GPS may direct a driver to their destination, you as the owner can't simply give your dog detailed step-by-step instructions on how to sit, and despite your best efforts, the dog would inevitably have no idea what you want from it at the start.
However, the key to successfully training your dog (as any dog owner can tell you) is instead a system of rewards and punishments (i.e. the dog gets rewarded with a treat when it sits and isn't rewarded when it doesn't). This biases your dog's brain towards the desirable action that earns a treat and away from the less desirable actions that don't get rewarded. As you've probably already guessed, this is exactly how AI models are trained with reinforcement learning: they use human feedback to determine desirable and less desirable actions and then orient themselves towards the desirable ones. However, just like all other forms of ML, reinforcement learning has its own disadvantages.
Namely, the problem of sparse rewards: when training a newly created AI, you would likely have to wait an incredibly long time until the AI successfully completed the desired task by chance before you could reward it. To bring this back to our dog-training example, it would be the equivalent of telling your new dog to sit 50 times, to which it responds by doing a series of random tricks, and then waiting until it randomly discovers what "sitting" is before you give it a treat. This is a problem because rewarding your AI (or dog) for merely coming close to the desired task may lead it to believe that what it did is what you're looking for, while on the other hand, if you refuse to give your AI the reward until it perfectly accomplishes the task by chance, then you'll likely be waiting a really, really, really long time.
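Here's the dog-training analogy as code: a bare-bones value-update loop (a stripped-down form of Q-learning) with a single state and four hypothetical actions, where only "sit" ever earns a treat.

```python
# A minimal reinforcement-learning sketch of the dog-training analogy.
# One state, four hypothetical actions; only "sit" ever earns a reward.
import random

actions = ["bark", "spin", "run", "sit"]
values = {a: 0.0 for a in actions}   # how good the dog believes each action is
learning_rate = 0.5
epsilon = 0.2                        # chance of trying a random action (exploration)

for episode in range(200):
    if random.random() < epsilon:
        action = random.choice(actions)        # explore
    else:
        action = max(values, key=values.get)   # exploit the current best guess
    reward = 1.0 if action == "sit" else 0.0   # sparse: most actions earn nothing
    values[action] += learning_rate * (reward - values[action])

print(values)   # "sit" ends up with by far the highest value
# The catch: early on, rewards are rare (the sparse-rewards problem), so the
# more possible actions there are, the longer the agent flails around before
# it ever stumbles onto "sit" and gets its first treat.
```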
Real world examples of AI in action
Ok, so now that we understand the four different types of machine learning, let's take a look at some case studies of AI being applied in the real world.
MENACE
Now let’s start our exploration into the real-world of AI with the revolutionary “AI system” MENACE (Matchbox Educable Noughts and Crosses Engine) designed for playing Tic Tac Toe, which although starting out terrible at the game, mostly making random moves, it’s able to learn and improve with the help of human feedback (an obvious example of reinforcement learning).
The only twist, of course, is that this system was not electronic or computer-based in the slightest. It was instead made from a system of around three hundred matchboxes, with each matchbox representing one of the many board positions MENACE could encounter, and inside each box were a number of differently coloured beads (drawn at random by a human helper) that represented moves MENACE could make.
Whenever MENACE won a game, two more beads of the colours used would be added to each matchbox involved, and when it lost, two beads of the colours used would be removed. So if MENACE lost its first match badly and the moves it made were all represented by red beads, then two red beads would be removed from each box used; and if it won convincingly with green beads in its second match, then two green beads would be added to each matchbox used.
Over time, this meant that bad moves were less likely to be chosen at random while good moves became more likely, which overall improved MENACE's performance. As shown in the below image, the number of beads in the MENACE system slowly increased over time, which directly corresponds to the AI's improving Tic Tac Toe performance.
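Here's a simplified sketch of the bead mechanism in code. The move names and starting bead counts are illustrative, and real MENACE's bead scheme differed slightly, but the draw-a-bead, add-on-win, remove-on-loss logic is the same idea.

```python
# A simplified sketch of MENACE's bead mechanism, treating one matchbox
# (one board position) as a dictionary of bead counts.
import random

box = {"corner": 4, "edge": 4, "centre": 4}   # beads = how favoured each move is

def pick_move(box):
    # Drawing a bead at random: moves with more beads get picked more often.
    beads = [move for move, count in box.items() for _ in range(count)]
    return random.choice(beads)

def update(box, move, won):
    if won:
        box[move] += 2                     # reward: add beads for the move used
    else:
        box[move] = max(1, box[move] - 2)  # punish: remove beads (keep at least 1)

move = pick_move(box)
update(box, move, won=True)
print(box)   # over many games, winning moves accumulate beads
```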
ALPHA-FOLD
First things first: before we can understand what AlphaFold is and how it works, we need to understand the context behind this marvel of AI technology. If you were to ask the average person, you'd probably expect them to have at least a basic understanding of what proteins are. Most people would think of them as one of the necessary ingredients to grow muscle, or as one of the several biomolecules (alongside carbohydrates, fats, etc.). All of which is true, but something most people don't know about proteins is just how versatile, and hence important, they are, all because of their unique ability to fold. So let's decipher what this means.
Proteins are built from a set of 20 amino acids; a protein can be made of fewer than 10 amino acids or contain several thousand, but the amino acids it's made of always come from those 20. One way to think about this is to envision one of those make-your-own-bracelet kits with different beads and a string you can use to create your own bracelet. A protein is basically a bracelet made from a set of 20 different types of beads (the amino acids); it could be made of only a few beads or a few thousand, but those beads have to come from the set of 20. Now let's imagine we have our protein bracelet with all its different amino acid beads strung in a line. It's not very useful as a simple 1D line in space; instead, the final step of creating our protein bracelet is to fold it into a useful shape, like a loop for our hand. Now that you hopefully have a basic understanding of how protein folding works, you may be asking yourself: if protein folding is this simplistic, why was AlphaFold created?
Well, the truth is protein folding is far from this simplistic. To use another analogy, imagine you had a single sheet of paper. Finding out what that paper is made of wouldn't be that hard (just as finding out which amino acids a protein contains isn't), but now imagine someone used origami to transform the paper into any one of a huge number of 3D shapes, and you had to decipher which shape it was without being able to see or touch the origami. Things just got interesting, right? This is the infamous protein folding problem that stumped expert scientists, researchers, and biologists for decades. Because proteins are so small, orders of magnitude smaller than what can be seen with a light microscope, figuring out which 3D structure a protein folds into becomes insanely hard; and because proteins are so essential to all living things, from bacteria, to immune cells, to viruses, understanding what shape a protein folds into could drastically improve medical treatments, disease prevention, and much more.
Unfortunately, the protein folding problem is made even worse by something called Levinthal's paradox, which says that each individual protein theoretically has on the order of 10³⁰⁰ different folded structures it could adopt. Just to put that into perspective: the time it would take a single protein to cycle through all 10³⁰⁰ configurations (even at many configurations per second) would be far greater than the age of the universe (about 14 billion years). So it's not hard to see why this has often been painted as one of the great problems of biology for the last half-century. This is where the Critical Assessment of Structure Prediction (CASP) comes in. Founded in 1994, CASP is a competition held every two years in which contestants try to predict the folded structure of a protein based only on its amino acid makeup. The entries are then cross-referenced with the actual protein structure to compute a global distance test (GDT) score, which essentially measures how accurate the prediction is, with a score of around 90 GDT considered competitive with experimentally determined structures.
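A quick back-of-envelope calculation shows just how hopeless a brute-force search would be; the folds-per-second rate below is a deliberately generous assumption, not a measured value.

```python
# Back-of-envelope check on Levinthal's paradox.
configurations = 10 ** 300          # possible folded structures per protein
folds_per_second = 10 ** 12         # assume a trillion configurations tried per second
age_of_universe_s = 14e9 * 365.25 * 24 * 3600   # ~14 billion years, in seconds

time_needed_s = configurations / folds_per_second
print(f"Brute-force search: {time_needed_s:.1e} seconds")
print(f"Age of the universe: {age_of_universe_s:.1e} seconds")
# ~1.0e+288 seconds versus ~4.4e+17 seconds: brute force is hopeless,
# which is exactly why a predictive model like AlphaFold matters.
```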
This is where AlphaFold comes in. The initial AlphaFold 1 was essentially a repurposed computer-vision algorithm, trained on all the available data on known protein structures, and it took 1st place at CASP13 with an accuracy of nearly 60 GDT on the hardest targets. Its successor, AlphaFold 2, knocked the ball completely out of the park with a median accuracy of around 92 GDT. To put that into context, this new model's average error was comparable to the width of a single carbon atom, a revolutionary level of precision that gave the most proven experimental methods a run for their money. If we peer beneath the hood of this new and improved AlphaFold model, we find that it works through a deep learning algorithm that (1) takes in the input amino acid sequence that makes up the protein and compares it to similar (but not identical) sequences belonging to other proteins with known structures to find any possible connections, creating a rough diagram of what the protein could look like, (2) refines this rough diagram into the most likely structure, and (3) transforms this refined structure into an actual 3D shape.
As you can see in the above images, the blue parts of the protein structure are the shape AlphaFold predicted, while the green parts are the actual protein structure, and lo and behold, they line up almost perfectly.
SELF-DRIVING CARS
Although self-driving cars often seem like a piece of sci-fi technology belonging only to the distant future, numerous advances in machine learning and sensing technology have brought widespread autonomous vehicles into the very near future.
Self-driving cars use advanced sensors like radar, cameras, and LiDAR (Light Detection and Ranging) to obtain a 360-degree view of their surroundings. For those of you who are wondering, LiDAR differs from radar in that instead of using radio waves to map out its surroundings, it uses infrared laser pulses. It works by shooting out laser pulses and timing how long each takes to bounce all the way back, which is a very accurate way to figure out how far away an object is.
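The distance math behind this is simple enough to fit in a few lines; here's the basic time-of-flight calculation (the 200-nanosecond echo is just an example value).

```python
# The time-of-flight principle behind LiDAR ranging.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def distance_from_echo(round_trip_seconds: float) -> float:
    # The laser pulse travels out AND back, so halve the total path.
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# An example echo that returns after 200 nanoseconds:
print(f"{distance_from_echo(200e-9):.2f} m")   # -> 29.98 m
```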
This is a very important inclusion in autonomous vehicles because, although camera technology has become significantly more advanced, cameras are still nearly as susceptible to adverse light conditions like extreme glare, darkness, and fog as human eyes are. An algorithm that can also rely on LiDAR, by contrast, can perform much better in these adverse light conditions than any human could.
In addition, a self-driving car uses a combination of supervised, unsupervised, reinforcement, and deep learning (the four ML types we talked about earlier) to make decisions with this sensor data. Supervised and deep learning are used to train the algorithm to recognize objects and accurately identify pedestrians, animals, lamp posts, and everything in between. Unsupervised learning is used to find anomalies in the environment by comparing it to the model environments the algorithm was trained in, which helps it avoid accidents and respond to situations it was not specifically trained for.
Finally, reinforcement learning is an incredibly useful tool for improving an algorithm's decision-making through a combination of rewards for safe decisions and punishments for unsafe ones. Another important use of deep learning neural networks in autonomous vehicles is their potential application in facial recognition, identifying the driver and determining whether or not they're authorized to use the vehicle, drastically reducing the likelihood of vehicle theft.
However, one very important side note when it comes to training autonomous vehicle algorithms is that heavy consideration must be taken to ensure that the training environments are as varied as possible. As we learned earlier with unsupervised and deep learning algorithms, if the autonomous vehicle were trained in environments where 90% of the other cars were blue, that runs the risk of the algorithm coming to the potentially dangerous conclusion that everything blue is a car.
GENCAST
Although Machine Learning-based Weather Prediction (MLWP) models have shown a lot of promise compared to standard weather prediction methods in the past few years, even the best MLWP models have had their flaws and have been unable to compete with the most state-of-the-art weather forecasts... or at least they were. But before we get ahead of ourselves, we first need to cover how exactly modern weather predictions are made. Traditional weather forecasts are made using numerical weather prediction (NWP) models: physics-based simulations of the atmosphere that use data collected from weather stations, balloons, satellites, etc. to build a best estimate of the current weather, which is then used to make multiple predictions of how the weather will unfold over time, with the average of these possibilities being used in the actual weather forecast.
Now that you've gotten over your initial shock that weather forecasts aren't made by throwing darts at a dartboard, you're probably asking yourself: if a great deal of science, technology, and expertise goes into creating these physics-based models, then why are weather forecasts, especially forecasts more than a few days out, so inaccurate?
Essentially, we have two big factors to blame: the inherent unpredictability of the atmosphere, and the quality of the data collected. In the world of meteorology (and a lot of pop culture) there's a concept known as the butterfly effect, one you're likely familiar with. The idea goes that the single flap of a butterfly's wings on one side of the world can set off a chaotic, Rube Goldberg-like chain of events that eventually leads to a hurricane on the other side of the world. While the idea that a single butterfly can cause a hurricane does feel like something out of a kids' cartoon, the core premise, that seemingly tiny and unrelated changes can have massive effects, definitely holds true in science, and especially in weather forecasting. Say you were given the task of predicting what the weather in downtown Calgary will look like three days from now. The standard approach would be to use previous weather data specific to downtown Calgary to determine the most likely outcome using the physics-based models we talked about earlier.
The problem with this approach is that it doesn't take into account the multitude of other factors that affect the weather in downtown Calgary: everything from the amount of UV light reaching the upper atmosphere, to the weather in the university district, and even in Edmonton and beyond, has a slight (or not-so-slight) effect on the local weather. And as anyone who understands compound interest knows, these seemingly insignificant inputs add up to produce results far different from what you initially predicted. Now that we've covered why the atmosphere is so unpredictable, let's dive into our second hurdle: the quality of collected weather data. Odds are you've come across the saying "you reap what you sow". This especially holds true for those very same numerical weather prediction (NWP) models we talked about earlier: feed a weather model flawed or inaccurate data and it gives flawed, inaccurate results.
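To see both of these hurdles in miniature, here's a tiny demonstration using the logistic map, a classic toy model of chaos (a stand-in for the atmosphere, not a real weather simulation).

```python
# Sensitivity to initial conditions, shown with the chaotic logistic map.
def logistic_step(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.400000000, 0.400000001   # two "measurements" differing by one billionth
for step in range(40):
    a, b = logistic_step(a), logistic_step(b)

print(abs(a - b))   # after 40 steps the two trajectories have fully diverged
# A billionth-of-a-unit error in the starting data eventually swamps the
# prediction -- the butterfly effect, and why tiny measurement errors can
# cripple even the best physics-based forecasts.
```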
Although this may not seem particularly mind-blowing ("well of course giving the model bad data gives you bad results"), this concept of bad data has huge repercussions for just how accurate these NWP models can be. For instance, if you were to use data collected from a weather station located in the university district, even this slight change in location, barely a few kilometres from downtown, can be enough to produce significantly skewed data that renders the model's resulting prediction massively inaccurate. So, we've just dived deep into the how and why of what makes weather forecasts so inaccurate, and as you've probably guessed by now, this is where we introduce yet another revolutionary AI model: GenCast. GenCast marks a significant turning point in the world of Machine Learning-based Weather Prediction (MLWP); unlike previous AI weather models that output a single "most likely" forecast, GenCast composes an ensemble of 50+ plausible predictions of what the future weather will be.
In addition, GenCast was trained extensively on four decades' worth of historical weather data at various altitudes, temperatures, wind speeds, etc., which, among other things, has adapted the model to the spherical geometry of the Earth and taught it the complex interactions and relationships that shape global weather patterns.
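Here's a toy illustration of the ensemble idea. GenCast itself is a far more sophisticated generative model; this random-walk stand-in only shows why having 50 forecasts beats having one.

```python
# A toy ensemble forecast (illustrative only, not GenCast's actual method).
import random

def one_forecast(start_temp, days=7):
    temp = start_temp
    for _ in range(days):
        temp += random.gauss(0, 1.5)   # chaotic day-to-day wiggle
    return temp

ensemble = [one_forecast(20.0) for _ in range(50)]   # a 50-member ensemble
mean = sum(ensemble) / len(ensemble)
spread = max(ensemble) - min(ensemble)
print(f"mean forecast: {mean:.1f} C, spread: {spread:.1f} C")
# The spread tells you how uncertain the forecast is -- exactly the kind of
# fan of possible outcomes you'll see in the typhoon example below.
```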
For a real-world example of GenCast at work, look no further than the above image, which shows GenCast's many predictions for the path Typhoon Hagibis would take, from a week away to a day away. In the 7-day forecast, the model makes a multitude of different predictions as to where the typhoon will go, a lot of which diverge significantly from the actual path, but from the 5-day forecast onward the predicted paths diverge less and less from the actual path until they align nearly perfectly in the 1-day forecast. This remarkable performance offers a very promising picture of how AI and future MLWP models could better track and predict potentially dangerous weather events like cyclones and hurricanes, giving emergency personnel plenty of time to proactively evacuate people from potential danger spots and significantly reducing the casualties and destruction these extreme weather events cause.
Now that we understand how models like GenCast are being used to predict weather conditions on Earth, let's take a step back and look at how this can be applied to our sister planet, Mars. As we discussed previously, Martian dust storms are one of the most dangerous factors when it comes to exploring the Red Planet, not only because the extremely fine and statically charged dust easily finds its way into machinery (and potentially the lungs of human astronauts) and renders solar panels virtually useless, but also because of just how unpredictable and chaotic the storms are. Martian dust storms are created when sunlight reaches the Martian surface: because of Mars's extremely thin atmosphere, uneven heating between the air in the upper atmosphere and the air closer to the ground creates an imbalance, causing cold air to fall and warm air to rise, which creates wind, and thereby dust storms, as stray dust is taken along for the ride.
Sounds simple enough, right? All we have to do is track which parts of the Martian surface get the most sunlight, and then we know those parts have the most intense dust storms? If only it were that easy; in truth it gets a tad more complicated. Atmospheric scientists Claire Newman and Mark Richardson found through their research that the areas on Mars that receive the most intense wind also didn't have a lot of dust to fuel sizeable dust storms, while dust storms were more common in areas with average wind levels, where dust supplies could more easily replenish. Another important thing to consider is the seasonal changes in the Martian climate, as well as the presence or absence of mountains, valleys, ice caps, etc., all of which can affect wind speed and the likelihood of dust storms to some degree.
All these factors add up, so much so that although smaller-scale storms are usually somewhat predictable, the larger, global dust storms are much more unpredictable and chaotic: a storm could move south one year, then in the complete opposite direction the next, or simply not appear at all.
Data
My project is a research project, so I am not working with any experimental data.
Conclusion
As I've hopefully been able to convince you over the several minutes it took me to present my project (or for you to read it), the future of AI and machine learning is incredibly promising, with models like AlphaFold, GenCast, autonomous vehicle algorithms, and even MENACE continuously pushing the limits of what was thought possible with technology. To return to the concept of AI in Martian rovers: we learnt how models like GenCast could be used to better predict massive, planet-wide dust storms on the Martian surface, which could not only prevent future rovers from being pushed into the deep end by the silent killer that is Martian dust, but could also provide greater insights into the complex nature of Martian weather, which would without a doubt further scientific discovery and humanity's eventual settlement of the stars.
But the applications of AI and machine learning go far beyond the world of space exploration: from AI-powered machines assisting during complex surgeries, to models that optimize traffic flow and increase public safety in smart cities, to the emergence of an artificial general intelligence that could forever change the world as we know it today.
Citations
Research Sources
https://arxiv.org/pdf/2404.01116
https://www.astronomy.com/science/failed-mars-missions-a-brief-history/
https://airandspace.si.edu/air-and-space-quarterly/spring-2022/attack-martian-dust-storms
https://www.science.org/content/article/mammoth-dust-storm-mars-has-left-nasa-rover-dark
10 Pros and Cons of Reinforcement Learning [2025] - DigitalDefynd
Highly accurate protein structure prediction with AlphaFold | Nature
https://en.m.wikipedia.org/wiki/CASP
https://medium.com/@katalesanket90/machine-learning-in-self-driving-cars-8b5d1c685d3b
https://www.rinf.tech/how-machine-learning-is-used-in-autonomous-vehicles/
Image Sources
https://img.freepik.com/premium-vector/cute-baby-cartoon-vector_776251-87.jpg?w=1380
https://dda.ndus.edu/ddreview/wp-content/uploads/sites/18/2021/10/selfDriving.png
Acknowledgement
I'd like to thank my parents, who have always been there to support and help me through every step of the research process, and who are the reason I strive to be the best version of myself possible.