Wednesday, September 4, 2024

PhaseE

 Like it or not, any procedure to compute various mathematical operations in polynomial time on a quantum computer will need to use phase estimation. And that is what the first part of the lesson on Shor's algorithm seeks to define.


Below, AI clearing things up on the matter.

From GEMINI PRO:




From META:


                                       *     *     *
You

Quantum computing with Qiskit; can you explain the phase estimation problem to me, in simple terms?

Copilot

Sure! The phase estimation problem is a fundamental task in quantum computing, especially useful in algorithms like Shor’s algorithm for factoring large numbers. Here’s a simplified explanation:

What is Phase Estimation?

Phase estimation is about finding the “phase” (a specific value) associated with an eigenvalue of a unitary operator. In simpler terms, it’s like trying to figure out a hidden angle that a quantum state has been rotated by.

How Does It Work?

Superposition: First, we put some qubits into a superposition state using Hadamard gates. This means they are in a mix of 0 and 1 states.

Apply Unitary Operations: We then apply a series of controlled unitary operations to another set of qubits. These operations are like rotations, and the amount of rotation is related to the phase we want to estimate.

Inverse Quantum Fourier Transform (QFT): After applying the rotations, we use an inverse Quantum Fourier Transform on the qubits. This step helps to decode the phase information from the superposition state.

Measurement: Finally, we measure the qubits. The result gives us a binary number that we can convert to the phase value.
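Those four steps can be sketched end-to-end with plain NumPy (a hypothetical toy, not Qiskit: 3 counting qubits and a hidden phase of 1/8, with the controlled-U rotations folded in analytically via phase kickback):

```python
import numpy as np

# Toy phase estimation: n = 3 counting qubits, hidden phase theta = 1/8,
# where U|psi> = e^{2*pi*i*theta}|psi> for the eigenstate |psi>.
n = 3
N = 2 ** n
theta = 1 / 8

# Steps 1-2: Hadamards plus the controlled-U^(2^j) operations leave the
# counting register in (1/sqrt(N)) * sum_k e^{2*pi*i*theta*k} |k>.
k = np.arange(N)
state = np.exp(2j * np.pi * theta * k) / np.sqrt(N)

# Step 3: inverse QFT. Its matrix is (1/sqrt(N)) * e^{-2*pi*i*jk/N},
# which is exactly np.fft.fft rescaled by 1/sqrt(N).
state = np.fft.fft(state) / np.sqrt(N)

# Step 4: measurement probabilities peak at m = theta * N = 1, so the
# measured bits read 001 and theta = 1/8 is recovered.
probs = np.abs(state) ** 2
print(np.argmax(probs), round(probs.max(), 3))  # → 1 1.0
```

Because 1/8 fits exactly in 3 bits, the peak is a clean delta; a phase that is not a multiple of 1/8 would instead spread probability over neighbouring outcomes.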

Example
Imagine you have a qubit that has been rotated by an angle θ. Phase estimation helps you find out what θ is. If θ is π/2 (90 degrees), the algorithm will help you determine this angle.

Why is it Important?
Phase estimation is crucial because it forms the basis of many quantum algorithms, including those for solving complex mathematical problems and simulating quantum systems.

If you want to dive deeper, you can check out tutorials and resources on Qiskit, which is a popular quantum computing framework by IBM.

Does this help clarify things? Feel free to ask more questions if you have any!





Surprising

 Today is promising for me. I will be going through IBM Lesson 7 on Shor's Algorithm...😀

                                                              *     *     *

Try as I might, I could not get any backend to run more than 4096 shots on my simple circuit, so I will move on from this. I did two runs on ibm_sherbrooke last night (there was no backlog, because yesterday was a maintenance day). Still thinking in terms of success, both were somewhat surprising, because I ended up with 50%+ for 00 in both cases.


Asking for measure_all():


Asking for measure([0, 1], [0, 1]):


                                                                          *     *     *

By clicking on the Bird outline, on the available-backends page, one can see how many pending jobs there are.

Sherbrooke, QC, Canada 10AM:

Kyoto, Japan 11PM:

Brisbane, Australia 12 (MIDNIGHT):



Tuesday, September 3, 2024

Subroutine

 Complexity Theory, as applied to Quantum Computing, is serious business. As an initiate to the field, suffice it to appreciate that Big O notation allows one to understand that operations like addition, at O(n), are the most cost (or time) effective. Moving on to multiplication, division and beyond, we are looking at higher costs: quadratic, O(n^2), for the greatest common divisor, and cubic, O(n^3), for modular exponentiation. Exponential, though? Well, actually not. All of these can be accomplished in polynomial time, by using a certain set of gates for the operations.


These gates are the NOT, CNOT and Toffoli (CCNOT). And by employing them judiciously, one can run classical operations as subroutines within otherwise clearly quantum computations.


A Toffoli gate:
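To make the "classical subroutine" point concrete, here is a tiny Python sketch (just the Boolean behaviour, not Qiskit) of what the Toffoli computes: with its target wired to 0 it is a reversible AND, and it is its own inverse.

```python
# Toffoli (CCNOT) as a classical reversible operation: the target bit
# flips only when both control bits are 1.
def toffoli(a, b, t):
    return a, b, t ^ (a & b)

# With the target prepared at 0, the output target is a AND b:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", toffoli(a, b, 0))

# Self-inverse: applying the gate twice restores the input.
assert toffoli(*toffoli(1, 1, 0)) == (1, 1, 0)
```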

So the process involves creating ancillary bits initialized to |0⟩. The information from the original input then gets transferred through what is now a quantum query gate. The intermediate results subsequently get erased by running the computation backward.
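A sketch of that compute / copy / uncompute sequence, again with classical reversible gates (the function names here are hypothetical, for illustration only; the point is that the ancilla returns to 0, leaving no garbage):

```python
# Reversible building blocks.
def cnot(c, t):
    return c, t ^ c

def toffoli(a, b, t):
    return a, b, t ^ (a & b)

def garbage_free_and(a, b):
    anc = 0   # ancilla prepared in |0>
    out = 0   # output wire
    a, b, anc = toffoli(a, b, anc)   # compute a AND b into the ancilla
    anc, out = cnot(anc, out)        # copy the answer to the output wire
    a, b, anc = toffoli(a, b, anc)   # uncompute: run backward, anc -> 0
    return out, anc

for a in (0, 1):
    for b in (0, 1):
        out, anc = garbage_free_and(a, b)
        assert out == (a & b) and anc == 0   # correct result, clean ancilla
```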





Monday, September 2, 2024

More Trials

Re-did the QASM test with our code for a 50% probability, only upped the number of shots from 1024 to 4096. The results were indeed better: we got 2075 successes out of 4096, thus .507, where we had been getting 530 out of 1024 for .517.
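The two rates are easy to re-check with one-line arithmetic:

```python
# Re-checking the two quoted success rates:
print(round(2075 / 4096, 4))  # → 0.5066
print(round(530 / 1024, 4))   # → 0.5176
```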



Real:

 
Ideal:




                                                                 *     *     *

Gemini 1.5 Pro

Binomial Distribution; help me understand why, when I run more trials, my results are more accurate but - at the same time - the probability of my actual result is lower…

When dealing with a binomial distribution, understanding the role of increasing the number of trials helps make sense of various results and their probabilities.


1. Law of Large Numbers: As you run more trials, the experimental or empirical results tend to get closer to the theoretical or expected value of the distribution. This is due to the Law of Large Numbers, which states that as the number of trials increases, the sample mean will converge to the expected value. This results in more accurate results in terms of capturing the true probability of success.


2. Probability of Specific Outcomes: As the number of trials increases, the binomial distribution becomes more spread out. With a larger number of trials, the variance increases, and the probabilities are distributed across more possible outcomes. This means that any specific result, particularly one that occurs at the tails of the distribution, will have a lower probability. In simpler terms, while the outcome you're observing might be more accurate in reflecting the underlying probability, the chance of observing exactly that outcome in a vast number of possibilities becomes smaller.


3. Central Limit Theorem: Additionally, as the number of trials increases, the binomial distribution tends to resemble a normal distribution, especially when the probability of success is not too close to 0 or 1. The larger spread of possible outcomes further reduces the probability of landing on any single specific number of successes.


In summary, more trials lead to results that more closely reflect the expected probability, but they simultaneously result in a lower probability for any specific number of successes due to the broader distribution of possible outcomes. 
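Point 2 above is easy to see numerically: for a fair coin, even the single most likely outcome (exactly n/2 heads) becomes less probable as n grows, shrinking roughly like 1/√n.

```python
from math import comb

# Probability of the most likely count (exactly n/2 heads) for a fair
# coin, as the number of trials n grows.
for n in (16, 64, 256, 1024):
    p_mode = comb(n, n // 2) / 2 ** n
    print(n, round(p_mode, 4))
```

So at 1024 shots, even a perfect 512/512 split has only about a 2.5% chance of occurring exactly, while the observed fraction still hugs 0.5 ever more tightly.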

                                                                *     *     *

Gemini:

What is the Central Limit Theorem, and what is it used for?

The Central Limit Theorem (CLT) is a fundamental concept in statistics that states:


"The distribution of sample means approaches a normal distribution as the sample size gets larger, regardless of the shape of the population distribution."

Here's a breakdown of what this means:

Sample Means: Imagine taking multiple samples from a population (e.g., measuring the heights of 30 people, then another 30, then another 30, and so on).  Each sample will have its own average (mean). 

Distribution of Sample Means: If you plot all these sample means, you'll get a distribution of means. 

Normal Distribution: The CLT says that as your sample sizes get bigger, this distribution of sample means will start to look more and more like a bell-shaped curve, which is known as a normal distribution. 
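That breakdown can be checked with a quick simulation (a hypothetical toy: a fair six-sided die is a decidedly non-normal population, with mean 3.5):

```python
import random
import statistics

random.seed(0)

def sample_means(sample_size, n_samples=2000):
    """Draw n_samples samples of the given size from a fair die and
    return the mean of each sample."""
    return [statistics.mean(random.choices(range(1, 7), k=sample_size))
            for _ in range(n_samples)]

# As the sample size grows, the sample means stay centered near 3.5
# and their spread shrinks (roughly like 1/sqrt(sample_size)).
for size in (5, 30, 120):
    means = sample_means(size)
    print(size, round(statistics.mean(means), 2),
          round(statistics.stdev(means), 2))
```

A histogram of the size-120 means would already look convincingly bell-shaped, even though each individual die roll is uniform.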


Why is this important?

Inference: The CLT allows us to make inferences about a population based on a sample. Even if the original population data isn't normally distributed, we can use the CLT to assume that the distribution of sample means is approximately normal. This is crucial for hypothesis testing and confidence intervals.

Estimation: The CLT helps us estimate population parameters (like the population mean) using sample statistics.

Simplification: The normal distribution is well-understood and has many convenient properties. The CLT allows us to use these properties even when dealing with data that isn't normally distributed.


How is it used?

Here are some common applications of the Central Limit Theorem:

Quality Control:  Companies use the CLT to ensure that products meet certain quality standards. They take samples of products and use the CLT to determine if the average quality of the entire production run is within acceptable limits.

Medical Research:  Researchers use the CLT to analyze data from clinical trials. They can use the CLT to determine if a new treatment is effective by comparing the average outcomes of patients who received the treatment to those who received a placebo.

Social Sciences:  Social scientists use the CLT to analyze survey data. They can use the CLT to determine if there is a difference in the opinions of two groups of people.

Finance:  Financial analysts use the CLT to model the behavior of stock prices. They can use the CLT to estimate the probability of a stock price going up or down.


In summary:

The Central Limit Theorem is a powerful tool that allows us to use the properties of the normal distribution to analyze data, even when the original data isn't normally distributed. This makes it a cornerstone of statistical inference and decision-making in various fields.

 


Amazing

 Amazing massage for varicose veins...

Sunday, September 1, 2024

Intrinsic

 You

Quantum physics; tell me about quantum measurements. Are the 50/50 results because of measurement vagaries, or intrinsically so?

Copilot

Quantum measurements are indeed fascinating! In the quantum realm, the outcomes of measurements are not predetermined but are instead probabilistic. This means that the 50/50 outcomes you mentioned are intrinsic to the nature of quantum systems rather than due to measurement vagaries.


When we measure a quantum system, such as the spin of an electron, the system is in a superposition of all possible states until the measurement is made. The act of measurement causes the system to “collapse” into one of the possible states. For example, if an electron’s spin can be either up or down, before measurement, it exists in a superposition of both states. Upon measurement, it collapses to either spin-up or spin-down with certain probabilities, often 50/50 if the system is unbiased.


This intrinsic randomness is a fundamental aspect of quantum mechanics, known as quantum indeterminacy. Unlike classical systems, where uncertainty is due to lack of knowledge about the system, quantum systems are inherently probabilistic.
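The point about intrinsic probabilities can be sketched numerically (a toy NumPy illustration, not a real measurement): the 50/50 is in the amplitudes themselves, via the Born rule, before any experimental noise enters.

```python
import numpy as np

# The state |+> = (|0> + |1>)/sqrt(2): the Born rule gives
# |amplitude|^2 = 0.5 for each outcome, intrinsically.
plus = np.array([1, 1]) / np.sqrt(2)
probs = np.abs(plus) ** 2
print(probs)  # → [0.5 0.5]

# Sampling finitely many shots reproduces 0.5 only statistically,
# which is why real runs land near, not at, 50%.
rng = np.random.default_rng(0)
counts = rng.choice([0, 1], size=4096, p=probs)
print(counts.mean())  # close to 0.5, not exactly 0.5
```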


Does this help clarify things? Feel free to ask more questions if you’re curious about other aspects of quantum physics!

Weighing Matters

Looking back to my QASM trial results:

From Copilot, whom I asked to explain the results:

Sure! Based on the image you provided, here are the key points:
  • Probability of Success: You set the probability of success for each trial at 0.5.
  • Number of Trials: You conducted 1024 trials.
  • Number of Successes: You were interested in the probability of getting exactly 530 successes.

The results are:

  • Binomial Probability: The probability of getting exactly 530 successes is 0.01325.
  • Cumulative Probability (≤530): The probability of getting at most 530 successes is 0.86297.
  • Cumulative Probability (≥530): The probability of getting at least 530 successes is 0.13703.

These results help you understand the likelihood of different outcomes in your quantum computing trials. If you have any specific questions about these results, feel free to ask!
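Those figures can be checked exactly with Python's integer arithmetic (a sketch using math.comb; no binomial calculator needed):

```python
from math import comb

# X ~ Binomial(n=1024, p=0.5), looking at k = 530 successes.
n, k = 1024, 530
pmf = comb(n, k) / 2 ** n                                  # P(X = 530)
p_ge = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n   # P(X >= 530)
p_lt = 1 - p_ge                                            # P(X < 530)

print(round(pmf, 5), round(p_ge, 5), round(p_lt, 5))
```

One caveat: the two quoted cumulative figures sum to exactly 1, which suggests the "at most 530" value is really P(X < 530); including the 0.01325 spike at 530 itself gives P(X ≤ 530) ≈ 0.863 + 0.013 ≈ 0.876.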

                                                                    *     *     *

On Gemini, to whom I fed the 'ideal' result as well:






   