Tech Note T21: Estimating observables from sampled bit strings

Background.

An experiment on a quantum computer results in classical data. The experiment samples nominally identical copies of the same state ρ of the system some number of times M ∈ ℤ≥0. For an n-qubit system, each of the M samples returns a single (classical) binary string, such as 0110…, drawn from a distribution determined by ρ and the measurement basis. We call the empirical distribution of the M sampled bitstrings the counts: a random integer vector indexed by the set of all 2ⁿ possible measurement outcomes, i.e., the binary strings of length n, denoted Σ = {00…00, 00…01, …, 11…11}, whose entries sum to M. The classical counts data is post-processed to compute some desired statistic, such as the expectation value ⟨Ô⟩ of an operator Ô; e.g., the expectation of a Pauli string, ⟨ZIZI…⟩.
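As an illustration of the post-processing step, the sketch below estimates a Pauli-string expectation value directly from counts. It is a minimal example rather than the note's own tooling: it assumes the counts are given as a mapping from bitstrings to occurrences, that measurements were taken in the computational (Z) basis, and that the Pauli string contains only I and Z factors; the function name and the sample counts are illustrative.

```python
from collections import Counter

def pauli_z_expectation(counts: dict, pauli: str) -> float:
    """Estimate <P> for a Pauli string P of I's and Z's from measurement counts.

    counts: mapping from n-bit outcome strings (e.g. '0110') to occurrences.
    pauli:  string over {'I', 'Z'} of the same length n; measurements are
            assumed to be in the computational (Z) basis, so each outcome
            contributes an eigenvalue (-1)**(parity of the bits under the Z's).
    """
    total = sum(counts.values())          # total number of shots M
    acc = 0
    for bitstring, m in counts.items():
        # Parity of the measured bits at positions where the Pauli factor is Z
        parity = sum(int(b) for b, p in zip(bitstring, pauli) if p == "Z") % 2
        acc += ((-1) ** parity) * m       # eigenvalue +1 or -1, weighted by counts
    return acc / total

# Example (illustrative data): 4-qubit counts used to estimate <ZIZI>
counts = Counter({"0110": 480, "0000": 320, "1011": 200})
print(pauli_z_expectation(counts, "ZIZI"))   # 0.04 for these counts
```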

Question.

Given the sampling error, how close is our estimate of the derived statistic to its true value? How does the sampling error in the bitstrings propagate through the computation? How many shots M should an experimentalist run to reach a given confidence level?
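A rough quantitative answer, sketched below under standard assumptions (not taken from the note itself): each shot contributes an eigenvalue of ±1, so the per-shot variance is 1 − ⟨P⟩² and the shot-noise standard error of the M-shot mean falls as 1/√M; the Hoeffding bound for bounded random variables then gives a worst-case shot count M ≥ (2/ε²) ln(2/δ) for additive error ε at confidence 1 − δ. The function names and the numbers in the example are illustrative.

```python
import math

def standard_error(expectation: float, shots: int) -> float:
    """Shot-noise standard error of a +/-1-valued Pauli-string estimator.

    Each shot yields an eigenvalue of +1 or -1, so the per-shot variance is
    1 - <P>**2 and the standard error of the M-shot mean is sqrt((1 - <P>**2)/M).
    """
    return math.sqrt(max(0.0, 1.0 - expectation**2) / shots)

def shots_for_precision(epsilon: float, delta: float = 0.05) -> int:
    """Shots M guaranteeing |estimate - <P>| < epsilon with probability >= 1 - delta,
    via the Hoeffding bound for +/-1-bounded outcomes:
        M >= (2 / epsilon**2) * ln(2 / delta).
    """
    return math.ceil(2.0 * math.log(2.0 / delta) / epsilon**2)

# Example (illustrative numbers): shots for 0.01 additive error at 95% confidence,
# and the shot noise on the earlier estimate of 0.04 after 1000 shots.
print(shots_for_precision(0.01))             # ~73,778 shots (worst case)
print(standard_error(0.04, shots=1000))      # ~0.0316
```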