minkwe wrote:
gill1109 wrote:
Sorry Michel, you are now saying that you can get the negative cosine when you do something different from what I described.

Absolutely not, I'm doing exactly what you described. Perhaps now that I've told you what I do, you plan to change the goal posts.

I told you the domains of the functions A and B.

You did not. You told me the ranges in which a, b, and u can lie. You did not specify the domain, which is two-dimensional.

You ignored what I said.

Absolutely not. I read what you said carefully. Now you want to change the goal post.

You moreover added an ad hoc procedure to deal with the situation that (a, u_i) or (b, u_i) is not in the domain of A or B, respectively.

Absolutely not, I did no such thing. My functions take an angle in [0, 2 pi] and a number u in [0, 1], just like you specified in your "computer experiment", and they produce outcomes in {-1, +1}, just like you specified. Now you want to change the rules.

This is irrelevant. BTW it's not a trick; it simply shows that Bell did not have enough imagination about what was possible. That is why he made some very silly mistakes. But that's okay, some very smart people have made silly mistakes before.

I beat your challenge and now you want to change the rules. Let me remind you what you said:

Suppose I dream up some functions A and B, taking values in {-1, 1}, which are functions of (1) a direction represented by an angle in the interval [0, 2 pi] and (2) of a number “u” in the interval [0, 1].

That's exactly what my functions do.

I said:

I dream up some functions A and B, taking values in {-1, 1}, which are functions of (1) a direction represented by an angle in the interval [0, 2 pi] and (2) of a number “u” in the interval [0, 1]. I write programs, in Python, say, which compute A and B for any given values of the two arguments.
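To make the specification concrete, here is a minimal Python sketch of such a "computer experiment". The particular functions A and B below are illustrative placeholders (a sign-of-cosine rule), not anyone's actual model; the specification only requires that they map an angle in [0, 2 pi] and a number u in [0, 1] to {-1, +1}, defined everywhere.

```python
import math
import random

# Placeholder local functions: A and B each see only their own setting
# (an angle in [0, 2*pi]) and the shared hidden variable u in [0, 1].
# Both always return -1 or +1; "undefined" is not a possible value.

def A(a, u):
    # sign of cos(a - 2*pi*u): +1 or -1 for every input
    return 1 if math.cos(a - 2 * math.pi * u) >= 0 else -1

def B(b, u):
    return 1 if math.cos(b - 2 * math.pi * u) >= 0 else -1

def correlation(a, b, n=100_000, seed=0):
    # Monte Carlo estimate of E(a, b) = mean of A(a, u) * B(b, u)
    # over u drawn uniformly from [0, 1]
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        u = rng.random()  # shared hidden variable
        total += A(a, u) * B(b, u)
    return total / n
```

For this placeholder choice the estimated correlation follows the triangle wave 1 - 2|a - b|/pi rather than the negative cosine; the point at issue in the thread is whether any everywhere-defined pair of such functions can reproduce -cos(a - b) without post-selecting trials.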

Michel, you ignored a crucial sentence. *You* are talking about functions which take values in {-1, +1, “undefined”}.

By the way, there is some difference in the use of the words “domain” and “range” between mathematical traditions. England in the 1970s was different from mainland Europe at that time. Fashions change. Maybe that explains Michel’s misunderstanding?

From my maths it is evident that I do not allow ‘undefined’ as a possible value of the functions A and B. If you work through my derivation you can see exactly what I assume, if you had any doubts.

By the way, Michel’s model generates data such that the chance of accepting a data point depends on the difference between “a” and “b”. You will notice this when you plot the number of counted outcome pairs as a function of “a - b”. Please try it! Any physicist looking at your data will see that it is nonlocal in an unacceptable way.

Sorry, I forgot about Fine’s work. I remember him more for his proof that the 8 one-sided Bell-CHSH inequalities are necessary and sufficient for an LHV model to describe the data, as long as you have no-signalling (on each side, the individual outcome probabilities, given the individual setting, do not depend on the setting on the other side). Maybe “local” is now arguing that Fine stole from Boole, 1850s? Boole did the necessary and sufficient version of Bell’s original three-correlation inequality.
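For the record, the eight one-sided Bell-CHSH inequalities referred to here can be written as the following four double inequalities, with E(a, b) denoting the correlation of the two outcomes at settings a and b (the minus sign visits each of the four terms in turn):

```latex
-2 \le  E(a,b) + E(a,b') + E(a',b) - E(a',b') \le 2 \\
-2 \le  E(a,b) + E(a,b') - E(a',b) + E(a',b') \le 2 \\
-2 \le  E(a,b) - E(a,b') + E(a',b) + E(a',b') \le 2 \\
-2 \le -E(a,b) + E(a,b') + E(a',b) + E(a',b') \le 2
```

Each double inequality is two one-sided inequalities, giving eight in all; Fine's result, as summarised in the post above, is that together with no-signalling these are necessary and sufficient for the existence of a local hidden-variable model.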

I was inspired to see coincidence counting as a loophole by Hess and Philipp’s claims that Bell had not taken account of time. He had, explicitly. Hess first said that I had plagiarised them, which is not true, so we wrote down where we had got the idea from. Later Hess changed his tune: Pascazio had found it. I don’t recall Hess mentioning Fine, nor apologising that he had (though unknowingly) plagiarised Pascazio. It is true that Hess and Philipp’s mathematical model, which depended on probability densities rho that did not integrate to 1, could actually be rewritten, on normalisation, as a detection-loophole model. Hans de Raedt from Groningen (Netherlands) and his wife teamed up with Hess and did detection-loophole simulations based on it.