
Protecting All Citizens / Big Brother is Watching

That's part of why I'm thinking of basing LL off TL instead of Pop/Gov.

The other half is that as TL goes up, the potential destructiveness of even civilian tools makes it a necessary evil. But yes, LL could also be more 'effective' - and prone to being overridden by bribery.
 
I agree to an extent. But I also expect a black market, and citizens taking reprisals whenever possible, unless the government is a truly onerous one.

For example, it has become all the rage to smear a bit of Vaseline on the lens or box cover of surveillance cameras in many places where the "jokester" can get away with it.
 
Because we care

China has nothing on Netflix. The streaming TV service tweeted about its users' viewing habits today:

@Netflix:

To the 53 people who've watched A Christmas Prince every day for the past 18 days: Who hurt you?

It was funny until people started to think how creepy it was that Netflix knew what they were watching and could make educated guesses about why.

AI and big data are watching us and figuring us out; eventually they will start predicting our future actions with a high degree of accuracy.
 
I think more frightening than AI predicting our future actions is AI secretly influencing them. When you make a decision, is it your own choice, or an AI gaslighting you?
 
That's part of why I'm thinking of basing LL off TL instead of Pop/Gov.

The other half is that as TL goes up, the potential destructiveness of even civilian tools makes it a necessary evil. But yes, LL could also be more 'effective' - and prone to being overridden by bribery.

Perhaps both could be used. Mongoose, and I think Cepheus, have several rolls addressing getting in trouble with local law. The initial rolls for getting in trouble could be left to LL (making a misstep), while Prosecution/Defense could use TL, maybe...
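As a rough illustration of that split, here's a minimal sketch using the usual 2d6 roll. The harassment threshold and the TL-based prosecution bonus are purely illustrative assumptions, not rules from any published edition:

```python
import random

def roll_2d6():
    return random.randint(1, 6) + random.randint(1, 6)

def attracts_attention(law_level):
    # Stage 1 (LL-driven): rolling the Law Level or less on 2d6
    # means the party has made a misstep and drawn legal attention.
    return roll_2d6() <= law_level

def prosecution_succeeds(tech_level, target=8):
    # Stage 2 (TL-driven, hypothetical): better forensics and
    # surveillance at high TL, modelled as a +TL/3 bonus on a
    # 2d6 check against an 8+ target.
    return roll_2d6() + tech_level // 3 >= target
```

So a LL 9 world would hassle travellers on a 9-or-less (about 83% of 2d6 rolls), while conviction odds would scale with TL rather than with how strict the local code is.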
 
I think more frightening than AI predicting our future actions is AI secretly influencing them. When you make a decision, is it your own choice, or an AI gaslighting you?

Good question. A lot of this kind of AI is, and will be, used for targeted advertising and presenting the opportunity to purchase. Did you really want to buy that donut, or did the fact that you were bombarded with images of donuts, aromas of coffee and donuts, and the opportunity to click on a "deliver donut" icon push you into getting that donut?

Now suppose you have a law enforcement AI built on the same basis as the advertising AIs. It analyses you, sees that your socio-economic profile says you're part of the population likely to commit crime. It presents you with the opportunity to commit a crime and monitors your response. You're arrested, found guilty, and removed from society. Is it entrapment? Or is it exploiting statistical analysis of your behavior?


Now I'm off to find a donut :coffeesip:
 
I would think in this case the AI would be scrupulously fair. It would simply wait until the person who is profiled as likely to commit a crime breaks some law in the smallest degree.
Given that a society would likely have thousands upon thousands of laws there's every chance that within a relatively short period of time a person prone to breaking one, even accidentally, will and the AI is right there to bust them for it.
Worse, the more you get hit with small infractions the more the AI determines you will have additional ones in the future and the more it looks to nail you for them.
It is a Pygmalion effect with a feedback loop: once predicted to break the law, you break more laws and get arrested for more of them. Worse, the AI never does any of this maliciously. There's no entrapment. The AI can, simply by its nature, enforce the gazillion laws to the letter against anyone and everyone, all the time.
It would make Star Trek's Landru look lame by comparison.
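The feedback loop described above can be made concrete with a toy simulation; the daily infraction rate and the way scrutiny grows with each arrest are invented numbers chosen only to show the dynamic.

```python
import random

def simulate_citizen(days=365, infraction_rate=0.02, rng=None):
    """Toy model of the enforcement feedback loop described above.

    Each day the citizen commits a minor infraction with a small,
    fixed probability. The AI's scrutiny (its chance of catching
    the infraction) starts low but rises with every prior arrest,
    so arrests beget arrests. All rates are illustrative.
    """
    rng = rng or random.Random(0)
    arrests = 0
    for _ in range(days):
        if rng.random() < infraction_rate:            # accidental slip
            scrutiny = min(1.0, 0.1 + 0.2 * arrests)  # past arrests raise scrutiny
            if rng.random() < scrutiny:               # AI notices and busts them
                arrests += 1
    return arrests
```

Run it for a few simulated citizens and the first unlucky arrest visibly snowballs, even though no single step is malicious.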
 
This reminds me of a U. S. Grant quote.
I know no method to secure the repeal of bad or obnoxious laws so effective as their stringent execution.

Or are you saying that the AI is totally independent of any human control, and also cannot be reprogrammed, unplugged, or destroyed?
 
Depends on how it was programmed to learn, and what it was using as a template.


It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay — a Twitter bot that the company described as an experiment in "conversational understanding." The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through "casual and playful conversation."

Unfortunately, the conversations didn't stay playful for long. Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay — being essentially a robot parrot with an internet connection — started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.

Now, while these screenshots seem to show that Tay has assimilated the internet's worst tendencies into its personality, it's not quite as straightforward as that. Searching through Tay's tweets (more than 96,000 of them!) we can see that many of the bot's nastiest utterances have simply been the result of copying users. If you tell Tay to "repeat after me," it will — allowing anybody to put words in the chatbot's mouth.

However, some of its weirder utterances have come out unprompted. The Guardian picked out a (now deleted) example when Tay was having an unremarkable conversation with one user (sample tweet: "new phone who dis?"), before it replied to the question "is Ricky Gervais an atheist?" by saying: "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."
 
AI and big data are watching us and figuring us out; eventually they will start predicting our future actions with a high degree of accuracy.

That's one of the interesting things about the interface between TL and LL. The former allows for data to be collected, but the latter could be an indicator of how accessible that data is.

You can have all the data in the world, but if authorities can't access it then the data is useless to them. Note that there are laws in place in western countries (effectiveness is another matter) that are meant to prevent data on people being on-sold or otherwise simply transferred.

That's part of why I'm thinking of basing LL off TL instead of Pop/Gov.

The other half is that as TL goes up, the potential destructiveness of even civilian tools makes it a necessary evil. But yes, LL could also be more 'effective' - and prone to being overridden by bribery.

The nuances of variation even between the same LL from system to system should be considered. Weapons ownership can vary from place to place, even though the basic LL tables are centred around that. Access to different locations (not having the right SOC, so you can't enter restricted public space - similar to a gated community, but with entry otherwise determined) doesn't seem to get much consideration. Neither does the idea that a high LL may put restrictions on private industry being able to retain the data it collects, as opposed to having to hand it over to the government. A high LL may also provide higher levels of protection for individual rights, but simply apply heavier penalties should citizens fail to meet their legal responsibilities.
 
<Shrug> You could add nuance to a planet by using the LL to roll for whether a given law exists or not (or perhaps an existing law repurposed to fit a new situation).

Could be the planet allows lasers but bans knives for instance.
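One way to sketch that idea: roll per planet and per law, seeding the roll so a given planet always answers the same way. The 2d6 + LL vs. 8+ check and the function names here are assumptions for illustration, not anything from a published ruleset.

```python
import random

def law_exists(planet_name, law, law_level, target=8):
    # Seed per (planet, law) so the answer is stable: the same
    # planet always has, or lacks, the same law. The 2d6 + LL vs.
    # target check is an invented house rule for illustration.
    rng = random.Random(f"{planet_name}:{law}")
    return rng.randint(1, 6) + rng.randint(1, 6) + law_level >= target
```

A LL 12 world would pass the check for every law, while a quirky mid-LL world might well ban knives but shrug at lasers - exactly the sort of oddity suggested above.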
 
Could be the planet allows lasers but bans knives for instance.

That seems to come down to the will of a government to make laws that meet a society's expectations for well-being or safety, depending on how responsive a government is to its public, which would vary by government type.

If knives were part of a culturally dangerous series of practices that the government decided to stamp out it may do that. Or if it was a cultural expectation, such as Aslan residents on Regina clipping their dewclaws.
 
No one is allowed to cook, since dining is an enforced communal activity.

Subversive pancake making is a crime you know. Encouraging someone to make toast may be a misdemeanour, but actually wielding a skillet with delicious intent is another thing my friend, let me tell you.
 
Preparation of food in a substandard, unpermitted, and not regularly government-inspected kitchen, where not all the workers are properly licensed and certified in every task from food preparation to dishwashing, is forbidden... :oo:

Yes, you need a dishwasher's license, and to be in the dishwashers' union, to get a job washing dishes... :p
 