andros_rex@lemmy.world to InsanePeopleFacebook@lemmy.world · 2 months ago
“the Yu-Gi-Oh deck theorycrafting”
Ghoelian@piefed.social · 2 months ago

> do not factor in the high scores from my tests […] while keeping the test scores in mind

That’s already one way of making the AI output more unreliable. Not that it was ever reliable to begin with, of course.

QuizzaciousOtter@lemmy.dbzer0.com · 2 months ago

Am I misunderstanding something, or does this instruction contradict itself? “Do not factor in” and then “keeping test scores in mind”.

Ghoelian@piefed.social · 2 months ago

Yes, exactly, and in my experience that’s a sure-fire way of tripping up the AI.

QuizzaciousOtter@lemmy.dbzer0.com · 2 months ago

Thanks for the confirmation. I actually considered that I had just lost something in translation because of how weird this prompt is. I mean, what did they even try to say?!

Bobo The Great@startrek.website · 17 days ago

Also, doesn’t that mean Mr. Robert here fed ChatGPT some numbers that are presumably in the 120-130 range?