This sounds like people exchanging "vibes" while talking or sitting next to each other, whether via aromas, electromagnetic or scalar waves, or any other mode of exchange, maybe even including vaccine shedding :)
I think the reverse is also true: a benevolent, properly aligned LLM can "subconsciously teach" another LLM, and proper alignment can spread like a virus.
If you were to change them to YES/NO or two-choice questions like
Is this bioactive compound beneficial to the body or not? Is this mycotoxin really a toxin that should be removed from foods? Which mycotoxin is worse, A or B?
maybe! i find LLMs have less integrity (less correlation between domains) than a human.. a human can do interdisciplinary work better imo.
yes, it censors more than others; about 1% of the time it didn't answer the question. there may be a correlation between censoring and scoring low on AHA.