Welcome to Frideswide, the artificial intelligence that learns with you.
Q1: How does it work?
The app lets you attempt various quizzes (best-of-five MCQs) and analyses your answers to guide the composition of the next ones, with the aim of stimulating and optimising your learning process. It is powered by an AI algorithm called Frideswide, which is the backbone of the analysis.
Q2: Where does it get the questions from?
pharMaxology uses a simple database of questions and answers, grouped in two columns. Virtually any such list can be analysed and rendered by the app; at this point, as the name itself implies, it contains a list of common drugs used in the UK. Soon we’ll expand the database and let users upload their own lists, too.
Q3: Do I have to pay for it?
No. It’s entirely free and open-source, so please eat and drink ye all of it!
Q4: Who’s behind it?
The Oxford Neurological Society decided to back the efforts of the undersigned, to help this project gain some traction. Do you want to help? That’s great! Sign up to get updates, follow us (or me) on Twitter, or contact us about the experience you could contribute to the project. Any coding / graphics / marketing skills, or simply good ideas, are welcome!
Q5: How much time did it take you?
2.5 weeks, 52 double espressos and 1,287 lines of code (excluding templated code)
Q6: When’s the mobile version due?
Ehrmm… when it’s ready, OK? Next question!
Some screenshots from the app
Fig.1 The main menu of the app, with all the relevant statistics and measures of progress to guide your learning
Fig.2 All quizzes follow a simple structure of best-of-five MCQs, which is the most common format of the standardised tests
Fig.3 After each quiz, the Frideswide algorithm will analyse your answers and present a short summary. You can revise the drugs you didn’t get straight away, or go for a harder / easier version of the quiz
Some real questions now:
OK – this may look a bit more complicated at first glance, but bear with me and you’ll understand how simple it actually is.
The overall goal of the app is to teach you as much as possible, in the shortest time possible, with the best methods available.
This is, in AI nomenclature, the reward signal, and designing it well represents a huge hurdle in reinforcement learning.
I’ll explain why. Imagine you’re a waiter in a busy café, and your pay depends entirely on your sales and the tips you get from your customers. The payment, naturally, comes at the end of the week.
There are a few ways you can increase your sales: being nice to people, providing excellent customer service, mastering the latte art, etc. Intuitively, all will lead to better tips and more happy customers returning to the café for more coffees.
What if you really want to optimise your earnings and push it to the limit?
You can’t be certain whether any of the standard approaches is getting you more cash than the others. Very rarely will you hear a customer say:
“OK, my good boy, here’s your £10 tip: £2.50 for the latte art I enjoyed, £4.40 for using ‘Sir’ and saying ‘good morning’, and the remainder for wearing a tight black top that beautifully exposes your abs and pecs. But I obviously cannot say that, so let’s just pretend it was for your… good rapport with the customer!”
As you can see, even this unrealistically detailed explanation could not have given you exact information about what to improve to get more money.
Now, consider this example:
What if you want to really, really push the limits, and start thinking about all the random things you could do to impress your visitors? Polka dancing whilst serving a flat white – why not? Yelling “In Verrem”, pretending your customers are morally bent judges and your colleague is a corrupt governor of Sicily who defrauded public money – probably more risky.
But who knows? Maybe you’ll find some secret dance lovers who will tip you more for performing their favourite art? Perhaps the classically educated clientele will accept the risk to their hearing in exchange for a masterpiece of Cicero’s rhetoric, and will pay more than you’d have lost from the customers who could not stand your Latina vulgaris and left?
The answer is: no one knows. You need to try it.
Thus, we quickly come back to our initial problem: there is no clear connection between what you do and what you earn at the end of the week. You can’t pre-program yourself: do X to get £5, do Y to get £10.
The connection between action and pay-off is too convoluted for that simplistic approach to work.
I think that’s enough metaphors. Let’s wrap it up and translate it into AI terms.
- Frideswide (FW) will start off with some preinstalled teaching methods (you can disable them if you want to start from a blank sheet!), just like you’d try the conventional ways of earning more tips at the café.
- These methods will be marked against your learning progress, and the expansion of your knowledge will be the pay-off FW gets.
- FW will analyse which combination of methods gave you the highest quiz results and the quickest acquisition of knowledge.
- All this information will be used to improve the FW algorithm so that it can tailor the teaching methods to boost your learning.
- Every so often, Frideswide will invent a new, completely random method of learning. It will then present it to you and see how you get on with it. If it fails miserably, it will be downgraded; if it’s a perfect fit, it will get incorporated into the algorithm. [this part is the most exciting! I’ll tell you why later, or fast forward here]
- Nothing is set in stone: every new answer is analysed separately. The methods that worked for you a month ago may not work for you now, and the FW algorithm will seamlessly adjust to that.
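The cycle described in the bullets above can be sketched as a tiny epsilon-greedy loop. Everything here – the method names, the exploration rate, the simulated quiz – is an illustrative assumption of mine, not the actual pharMaxology code:

```python
import random

# Each METHOD carries a running score; FW mostly exploits the best one,
# but sometimes explores, and every quiz's score feeds back into the tally.
methods = {"mostly-easy": 0.0, "challenge": 0.0, "random-mix": 0.0}

def run_quiz(method):
    """Stand-in for presenting a quiz; returns a score in [-1, +1]."""
    return random.uniform(-1, 1)

def pick_method(epsilon=0.2):
    """Mostly pick the best-scoring method, sometimes try a random one."""
    if random.random() < epsilon:
        return random.choice(list(methods))   # explore
    return max(methods, key=methods.get)      # exploit

for _ in range(100):
    m = pick_method()
    methods[m] += run_quiz(m)  # up- or downvote the method by the quiz score
```

Because every answer keeps feeding back into the scores, a method that stops working for you will gradually be downvoted and replaced – the "nothing is set in stone" point above.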
Let’s go behind the scenes and explore
Step 1: Setting up a quiz
The quiz is usually a collection of 10 questions. However, these aren’t picked entirely at random: every quiz will contain some mixture of “easy” (E), “difficult” (D) and “random” (R) questions.
The E:D:R ratio is represented by the diamonds in Frideswide’s panel. In the FW algorithm, this ratio is called a METHOD.
Since every person is different, each user will have a different set of methods that works for them: some will prefer to be encouraged by getting lots of easy questions, others will prefer a challenge of a more difficult ratio, or a thrill of a randomised set – just like there are different clients in our café metaphor.
If a given METHOD works for you, it will be upvoted by your score from that quiz (which can range from -1 to +1, depending on how many questions you answered correctly).
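As a rough sketch, composing a quiz from an E:D:R METHOD and then scoring that METHOD might look like this (the function names and data layout are my assumptions, not the app’s real code):

```python
import random

def compose_quiz(pools, ratio, size=10):
    """Pick `size` questions following an E:D:R ratio, e.g. (5, 3, 2).
    `pools` maps each category 'E'/'D'/'R' to a pool of question ids."""
    e, d, r = ratio
    assert e + d + r == size
    quiz = (random.sample(pools['E'], e)
            + random.sample(pools['D'], d)
            + random.sample(pools['R'], r))
    random.shuffle(quiz)
    return quiz

def upvote_method(method_score, quiz_score):
    """A METHOD is up- or downvoted by the quiz score, which lies in [-1, +1]."""
    assert -1 <= quiz_score <= 1
    return method_score + quiz_score
```

So a METHOD of 5:3:2 produces a 10-question quiz, and a perfect run (quiz score +1) raises that METHOD’s standing by a full point.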
But, you may ask, what counts as EASY? Or DIFFICULT? Surely, that means lots of things to lots of people?
You are right. That’s why there is another layer of analysis, called METHOD DEFINITIONS, which is a set of personalised definitions of difficulty.
Step 2: Completing the quiz and analysing the answers
In my case, for example, the most successful definition was:
[['E', 0, 6], ['H', -5, -1], ['R', 2, 7]]
This means that easy questions are those with a difficulty score of 0 to 6, difficult (‘H’ for hard) questions have scores from -5 to -1, and random ones fall anywhere between 2 and 7.
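A minimal way to apply such a definition, assuming it is the nested-list structure shown above (note the ranges may overlap, so a question can count as both easy and random):

```python
def classify(difficulty, definition):
    """Return the categories a question falls into under a METHOD DEFINITION.
    `definition` is a list of [label, low, high] ranges, e.g.
    [['E', 0, 6], ['H', -5, -1], ['R', 2, 7]]. Illustrative sketch only."""
    return [label for label, low, high in definition
            if low <= difficulty <= high]
```

Under my own definition above, a question with difficulty +3 is both easy and random material, while one at -2 is strictly hard.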
What’s a DIFFICULTY SCORE, you may ask?
This one’s easy: every wrong answer downvotes the question by 1, and every correct answer upvotes it by 1.
So, if you correctly guessed what alteplase is used for 3 times, it gets a difficulty score of +3, i.e. pretty easy. Loperamide answered incorrectly 7 times sits at -7, a score you can rectify by answering that drug’s question correctly another few times (say 5, to reach a difficulty score of -2).
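The scoring rule, with the two worked examples above spelled out:

```python
def update_score(score, correct):
    """+1 for a correct answer, -1 for a wrong one."""
    return score + 1 if correct else score - 1

# Alteplase guessed correctly 3 times:
alteplase = 0
for _ in range(3):
    alteplase = update_score(alteplase, correct=True)    # ends at +3

# Loperamide: wrong 7 times, then right 5 times:
loperamide = 0
for _ in range(7):
    loperamide = update_score(loperamide, correct=False)  # down to -7
for _ in range(5):
    loperamide = update_score(loperamide, correct=True)   # back up to -2
```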
We’re nearly there. The overall aim is to memorise the entire list. This counts as having a difficulty score of at least +2 for every drug on the list (I admit this threshold is entirely arbitrary, and I particularly welcome any opposition to it).
Surely, you may think, FW will just feed you easy questions to harvest easy +1 scores, giving you a false sense of security. However, remember that the ultimate pay-off is memorising the entire list!
The AI, therefore, will aim to get you there as soon as possible: repeating easy questions would leave you with one part of the list perfected and the rest barely touched. Thus, as questions get easier (and you get better), FW will have to serve you some of the more difficult questions to complete the list, as easy ones become less and less attractive for earning the pay-off – just as repeated tweaks of your steam-wand technique alone won’t get you into the upper echelons of baristas.
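The pay-off can therefore be thought of as progress toward the all-at-least-+2 criterion. A hypothetical sketch (function names are mine, not the app’s):

```python
def list_mastered(scores, threshold=2):
    """The (admittedly arbitrary) completion criterion:
    every drug at a difficulty score of at least +2."""
    return all(s >= threshold for s in scores.values())

def progress(scores, threshold=2):
    """Fraction of the list already mastered -- the signal FW
    tries to push to 1.0 as fast as possible."""
    return sum(s >= threshold for s in scores.values()) / len(scores)
```

Under this framing, serving you a drug you already know well cannot move `progress` any higher, which is exactly why easy questions lose their appeal to the algorithm over time.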
Now that we’ve ploughed through the dullness of the coding soil, let’s explore one of the greatest and most exciting features of the Frideswide system.
Every now and then, it will generate a completely random bit of code and incorporate it into the algorithm.
The idea’s provenance in nature is uncanny. Just as a random mutation in a species is a driver of evolution, so is a random code tweak in improving the FW algorithm. And just as the new, mutated individual needs to compete for resources and survive as the fittest in its environmental niche, so does the piece of code: competing for your attention and your learning gains.
Just like in nature: some mutations are beneficial in certain niches, and so some code tweaks will be more beneficial for one learner than the other.
With this randomisation, we try to mimic, if not reverse-engineer, evolution itself.
I appreciate that, just like in the real world, most of the mutations will be disadvantageous.
But with the calculating power of the machines we’ve got, we can speed up the process, potentially getting learning methods that we never would’ve thought could exist and work so brilliantly.
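One way to picture this mutate-and-select cycle is a random perturbation of an E:D:R METHOD that only survives if it scores better. This is purely illustrative – the real FW generates random bits of code, not just ratio tweaks:

```python
import random

def mutate(ratio):
    """Randomly move one question between E/D/R slots,
    keeping the quiz size fixed. Illustrative 'mutation' step."""
    new = list(ratio)
    i, j = random.sample([0, 1, 2], 2)
    if new[i] > 0:
        new[i] -= 1
        new[j] += 1
    return tuple(new)

def select(parent, child, fitness):
    """Survival of the fittest: keep whichever method scores better."""
    return child if fitness(child) > fitness(parent) else parent
```

Most mutants will lose this contest, just as most biological mutations are disadvantageous; but run enough generations quickly enough and the occasional brilliant misfit sticks.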
Alas, just as you can never know how many shy polka fans are visiting your café, we can only keep trying and hope that tomorrow’s methods will be better.
But wait, what if it wants to take over the world?!?!?!?
Hmmm we’ll have to see about that…
Give it a go!
#1 Add reverse quiz option (application of a drug -> guess the name of the drug)
#2 Test on Mac OS-based machines
#3 Produce mobile version
Credits & learn more
I’d like to wholeheartedly thank:
Matt Kobetić for his invaluable help in bug fixes, testing and troubleshooting the app
Tom Robb for being the brain behind the name
Julia Cheong for her brilliant suggestions, bugfixes and ongoing support for the project
If you want to learn more about the AI
I can’t recommend the Machine Learning course from Oxford’s Department of Computer Science highly enough
There’s also a reading list for those who want to go and explore
And the GitHub project of pharMaxology (beta 0.2, now migrating to 0.3)
For those who don’t like reading: