Hi,
I just did a test with two multiple-choice questions, each worth one point. Getting both right gives a 100% score; suffice it to say, getting just one right gives you 50%, which is still a passing grade.
So you have two multiple-choice questions, unrelated to each other. Each question has four possible answers. When you finish the test, you get one more try. The questions and possible answers stay the same.
Let’s say you use both tries and you remember your two previous respective answers. If you were to brute-force guess your way through this test, what would your odds be of getting a passing grade or a 100%?
Edit: Both questions only have one correct answer.
IMPORTANT EDIT: YOU DO NOT KNOW WHICH ANSWERS YOU GOT RIGHT OR WRONG GOING INTO THE SECOND TRY. You only know how many questions you got right, not which ones. Sorry for the confusion!
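For anyone who wants to sanity-check an answer, the setup is easy to simulate. This is just a sketch, and it assumes one specific retry strategy (keep both answers if you scored at least one point, switch both if you scored zero), since the question leaves the strategy open:

```python
import random

OPTIONS = 4       # answers per question
QUESTIONS = 2
TRIALS = 200_000

rng = random.Random(1)
passed = perfect = 0
for _ in range(TRIALS):
    correct = [rng.randrange(OPTIONS) for _ in range(QUESTIONS)]
    first = [rng.randrange(OPTIONS) for _ in range(QUESTIONS)]
    score = sum(g == c for g, c in zip(first, correct))
    if score >= 1:
        # assumed strategy: a score of 1 already passes, so keep both answers
        final = score
    else:
        # score 0: both answers were wrong, so switch each to a different option
        second = [rng.choice([o for o in range(OPTIONS) if o != g]) for g in first]
        final = sum(g == c for g, c in zip(second, correct))
    passed += final >= 1
    perfect += final == 2

print(f"pass: {passed / TRIALS:.3f}")   # ≈ 0.75
print(f"100%: {perfect / TRIALS:.3f}")  # ≈ 0.125
```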
GPT-4 answered this. I will link its calculations below in the next comment.
The probability of passing the test by brute force guessing (i.e., getting at least one question right) is approximately 68.36%. This includes the scenarios where you get one question right and one wrong, or both questions right.
Additionally, the probability of getting a perfect score (i.e., both questions right) by guessing is approximately 19.14%.
To calculate the odds of getting a passing grade (at least one question correct) or a perfect score (both questions correct) through brute-force guessing, let’s break down the scenario:
First, we’ll calculate the probability of guessing at least one question correctly. There are two scenarios where you pass: getting exactly one question right, or getting both questions right.
For each question, the chance of a wrong guess on a single try is 3/4, so the probability of getting that question right in at least one of two attempts is 1 − (3/4)².
For two questions, the overall probability of passing (getting at least one question right) is the sum of the probabilities of these two scenarios.
Now, let’s calculate these probabilities.
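The quoted figures seem to come from treating the two attempts at each question as independent 1-in-4 guesses and then combining the two questions. A sketch of that arithmetic (my reconstruction, not GPT-4’s actual code):

```python
from fractions import Fraction

# Per question: missing a 1-in-4 guess on both of two *independent* attempts
# has probability (3/4)^2, so hitting at least once is 1 - (3/4)^2.
per_question = 1 - Fraction(3, 4) ** 2  # 7/16

# Across two questions: pass = at least one question hit, perfect = both hit.
p_pass = 1 - (1 - per_question) ** 2    # 175/256
p_perfect = per_question ** 2           # 49/256

print(float(p_pass))     # 0.68359375 (the quoted ~68.36%)
print(float(p_perfect))  # 0.19140625 (the quoted ~19.14%)
```

Note that this treats the retry as a fresh independent guess, ignoring both the remembered answers and the score information — which is why the next comments object to it.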
GPT-4 is a language model, and while it was an interesting take, it appears to be the wrong tool for the job.
The answer is wrong, and without any documentation or proof showing the line of thought used to determine the result, it’s just a useless number.
Math is not really about the result; it is about understanding the process. Having an AI do that is completely against the purpose of asking this kind of question in the first place. OP doesn’t need to know whether the chance is 68% or 75%, but rather how to figure it out.
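In that spirit, here is one way to actually figure it out: exhaustive enumeration with exact fractions. It assumes a retry strategy of keeping both answers when you scored at least one point (a score of 1 already passes, so changing anything can only hurt) and switching both answers when you scored zero:

```python
from fractions import Fraction
from itertools import product

CORRECT = 0          # without loss of generality, option 0 is right for both
OPTIONS = range(4)

p_pass = Fraction(0)
p_full = Fraction(0)
for g1, g2 in product(OPTIONS, OPTIONS):   # first attempt: 16 equally likely combos
    w1 = Fraction(1, 16)
    score = (g1 == CORRECT) + (g2 == CORRECT)
    if score >= 1:
        # strategy: keep both answers, so the final score equals the first score
        p_pass += w1
        if score == 2:
            p_full += w1
    else:
        # score 0: both answers were wrong; switch each to one of the 3 other options
        for n1, n2 in product(OPTIONS, OPTIONS):
            if n1 == g1 or n2 == g2:
                continue
            w2 = w1 * Fraction(1, 9)       # 9 equally likely switch combos
            final = (n1 == CORRECT) + (n2 == CORRECT)
            if final >= 1:
                p_pass += w2
            if final == 2:
                p_full += w2

print(p_pass)  # 3/4
print(p_full)  # 1/8
```

So under this strategy the chance of passing is exactly 3/4 = 75% and the chance of a perfect score is 1/8 = 12.5%. A different strategy (on a score of 1, keep one answer and change the other) raises the perfect-score odds but lowers the chance of passing — the point being that the strategy has to be part of the model.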