
A Criticism of the Bell Curve

Having run parties where everyone had very similar skill sets, I've found no basis for TBeard's assertion.

Roleplaying can stem from rules play or from character play. Diversity of skill sets breeds specialization, not cooperation: in a specialist party, everyone has their own field, and the others are often left cold during that character's "limelight."

Overlapping skills AND players who are willing to cooperate breed cooperation in play.

A balance of the two is needed.

Munchkins don't seek either... they simply want to hog all the face time.
 
Originally posted by Aramis:
Having run parties where everyone had very similar skill sets, I've found no basis for TBeard's assertion.

Roleplaying can stem from rules play or from character play. Diversity of skill sets breeds specialization, not cooperation: in a specialist party, everyone has their own field, and the others are often left cold during that character's "limelight."

Overlapping skills AND players who are willing to cooperate breed cooperation in play.

A balance of the two is needed.

Munchkins don't seek either... they simply want to hog all the face time.
<shrug>

I cannot evaluate your experience, so I speak from my own (28 years).

However, I will stipulate that roleplaying skill (and referee skill) can overcome formidable disadvantages in the game system. That said, I do not think this gets the game designer off the hook. Games should be designed so that they do not *force* players and referees to make up for dubious mechanics.
 
Originally posted by tbeard1999:
....
If you simply *had* to have a bell curve, I suggest 2d10.
...
And a d20 gives you plenty of room for 5 categories of modifiers.
Like S4, I spend a lot of time thinking about these things and have settled on 2D10 myself.

Some of the very things you dislike about a non-linear mechanic -- e.g., diminishing returns on modifiers, +5 being about the same as +6 -- are things I like. I also like that a +1 means a lot more.

But modifier balance is a key concern in a non-linear system. What I've learned is to keep modifiers small and few. As I think S4 mentioned, this forces a certain amount of design discipline and should prevent massive modifier tables and multiple levels of modifiers. For those who want multiple levels of modifiers, a d20 or d100 system works better.

I assume we are talking about mechanics in general. Both a 2D6 and d20 approach can be broken, have scaling problems, etc. Of course what some may see as a scaling problem others see as a feature.
 
Originally posted by Ptah:
Originally posted by tbeard1999:
....
If you simply *had* to have a bell curve, I suggest 2d10.
...
And a d20 gives you plenty of room for 5 categories of modifiers.
Like S4, I spend a lot of time thinking about these things and have settled on 2D10 myself.

Some of the very things you dislike about a non-linear mechanic -- e.g., diminishing returns on modifiers, +5 being about the same as +6 -- are things I like. I also like that a +1 means a lot more.

But modifier balance is a key concern in a non-linear system. What I've learned is to keep modifiers small and few. As I think S4 mentioned, this forces a certain amount of design discipline and should prevent massive modifier tables and multiple levels of modifiers. For those who want multiple levels of modifiers, a d20 or d100 system works better.

I assume we are talking about mechanics in general. Both a 2D6 and d20 approach can be broken, have scaling problems, etc. Of course what some may see as a scaling problem others see as a feature.
I don't think that there's anything inherently wrong with a bell curve method vs a single-die method. I just think that bell curve mechanics, especially ones based on d6s, are overrated. They are also deceptively easy to break -- Mercenary, for example.

In my opinion, designers should ensure that modifiers to success rolls are appropriate to the mechanic being used. And unfortunately, few seem to do so.

If you're gonna use 2d6 (8+ being success), for instance, there should be a very small range of likely modifiers, because a mere +3 assures that you will succeed a great deal of the time (83%). Similarly, a -2 will assure that you will fail most of the time (83%).

If you want lots of modifiers -- such as appear in CT -- then you need something other than 2d6 in my view. 2d10 is okay; it might have the range needed to accommodate the various CT modifiers as they are...or maybe not. Even with a 2d10 mechanic (success on 12+), it only takes a -4 to make it highly likely you'll fail (85%). And a +5 will make it highly likely you'll succeed (85%). A +6 will virtually guarantee it (90%). It isn't very hard to get a +6 with Mercenary's character generation system (my Army characters seldom wind up with fewer than 4 points of Gun Cbt, which if applied to Cbt Rifle is a +4 by itself), and Mercenary weapons (the +2 DEX modifier only requires a DEX of 8+; the autofire modifiers for the ACR and assault rifles average a 4 point shift).

2d10 would be a significant improvement and would move CT from "badly broken" to "somewhat broken" IMHO.

The ideal solution, in my mind, is the d20, with success on a 12+. It takes a +8 to reach an 85% chance of success and a +9 to reach 90%.

Of course, a reconsideration of what various skill levels mean would be needed in either a 2d10 or a d20 system. A level of 3 would only mean a 15% shift in a d20 system, vs a 42% shift in a 2d6 system or a 27% shift in a 2d10 system.
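
If you want to verify these break points yourself, here's a minimal brute-force sketch in Python; the helper names (chance, first_mod_reaching) and the 85%/90% cut-offs are just illustrative choices, not anything from the rules.

from itertools import product

def chance(dice, sides, target, mod=0):
    # Fraction of (dice)d(sides) rolls that reach `target` after adding `mod`.
    rolls = list(product(range(1, sides + 1), repeat=dice))
    return sum(sum(r) + mod >= target for r in rolls) / len(rolls)

def first_mod_reaching(dice, sides, target, threshold):
    # Smallest non-negative modifier whose success chance meets `threshold`.
    mod = 0
    while chance(dice, sides, target, mod) < threshold:
        mod += 1
    return mod

for label, dice, sides, target in [("2d6 vs 8+", 2, 6, 8),
                                   ("2d10 vs 12+", 2, 10, 12),
                                   ("1d20 vs 12+", 1, 20, 12)]:
    print(label,
          "reaches 85% at +%d," % first_mod_reaching(dice, sides, target, 0.85),
          "90% at +%d" % first_mod_reaching(dice, sides, target, 0.90))

Enumerating outcomes rather than using a closed-form sum keeps the sketch obviously correct for any dice-plus-modifier mechanic.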

Book 1 and COI characters would be hosed unless they got a lot more skills.

All in all, I'm now thinking I'd rather just try to fix the 2d6 system. It isn't difficult, conceptually. The number of skills available to characters needs to mirror Book 1. That's an easy fix -- use Book 4+ to determine the career if desired, but limit the number of skills to the number they'd get with Book 1. I'd have the player delete excess skills at the end of each term.

And of course, combat modifiers need to be reined in (which I did in my "Fixing CT Combat" thread) so that weapons have reasonable chances of hitting and penetrating.
 
Originally posted by tbeard1999:
If you're gonna use 2d6 (8+ being success), for instance, there should be a very small range of likely modifiers, because a mere +3 assures that you will succeed a great deal of the time (83%). Similarly, a -2 will assure that you will fail most of the time (83%).
In reality, there are plenty of tasks where a novice has a very small chance of success, and an expert is nearly guaranteed success. There are also plenty of tasks where the novice has no chance, and the expert has a lot of trouble, or where a novice can do it easily and an expert has no meaningful chance of failure. You just have to accept that some characters will be much better at specific tasks than others.
 
Originally posted by Anthony:
Originally posted by tbeard1999:
If you're gonna use 2d6 (8+ being success), for instance, there should be a very small range of likely modifiers, because a mere +3 assures that you will succeed a great deal of the time (83%). Similarly, a -2 will assure that you will fail most of the time (83%).
In reality, there are plenty of tasks where a novice has a very small chance of success, and an expert is nearly guaranteed success. There are also plenty of tasks where the novice has no chance, and the expert has a lot of trouble, or where a novice can do it easily and an expert has no meaningful chance of failure. You just have to accept that some characters will be much better at specific tasks than others.

Oh, I agree. I don't object to characters being good at some things.

My problem is what happens if the range of modifiers exceeds what the system can handle.

The most noticeable consequence in CT is that it is easy for almost *any* character to hit nearly all the time, if he chooses automatic weapons. And it's not terribly difficult to hit almost all the time with civilian weapons either. The same is true of NPCs, by the way. The reason is that it is too easy to get statistically significant bonuses. And while a clever referee can impose penalties to mitigate this, I resent being forced to do so.

And as noted before, bell curves reward relatively modest net modifiers and give really high net modifiers minimal extra value. On a 2d6 system like CT, there's just not much statistical difference between a +3 (83% chance of success) and a +6 (100% chance of success). However, there *is* a significant difference between no bonus (41%) and a +2 (72%).

And in combat particularly, it's very easy to wind up with net modifiers of 4+. With Book 4 weaponry, it's nearly impossible to avoid.

So my critique boils down to 2 things:

1. A bell curve system is far more granular than it first appears, because net modifiers beyond a very modest amount produce little additional real-world benefit. In the case of CT, a +3 makes success nearly automatic (83%).

2. This means that applicable modifiers must be extremely limited both in quantity and in size, unless you happen to like nearly automatic success or nearly automatic failure being common in games. I don't, by the way.

IMHO, these flaws cast serious doubts on the purported usefulness of bell curve systems in RPGs.

For a very simple example, consider a 2d6 system with 8+ being a success vs a 1d12 system with 8+ being a success. In both systems, an unmodified roll will succeed 41% of the time. However, look at how positive modifiers affect the chance of success (first is success chance in 2d6 system, second is success chance in 1d12 system).

+0 - 41% vs 41%
+1 - 58% vs 50%
+2 - 72% vs 58%
+3 - 83% vs 67%
+4 - 91% vs 75%
+5 - 97% vs 83%
+6 - 100% vs 92%
+7 - 100% vs 100%

See the difference? It only takes a +2 to get to a high chance of success (72%) in the 2d6 system, while it takes a +4 to get there in the 1d12 system.

Here are the success chances with negative modifiers:

-0 - 41% vs 41%
-1 - 28% vs 33%
-2 - 17% vs 25%
-3 - 8% vs 17%
-4 - 3% vs 8%

In other words, the d12 can accommodate a somewhat greater range of modifiers than the 2d6 system can -- about 50% more. (Not advocating a d12-based system for CT; just using it as an illustration.)
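
The two columns above are easy to reproduce by brute force; here's a minimal Python sketch (p_success is just an illustrative helper, and its rounding may differ from the truncated figures above by a point).

from itertools import product

def p_success(dice, sides, target, mod):
    # Brute-force every outcome and count those that make `target` after `mod`.
    outcomes = list(product(range(1, sides + 1), repeat=dice))
    return sum(sum(o) + mod >= target for o in outcomes) / len(outcomes)

print(" mod   2d6 (8+)   1d12 (8+)")
for mod in range(-4, 8):
    print("%+d    %4.0f%%     %4.0f%%" % (mod,
                                          100 * p_success(2, 6, 8, mod),
                                          100 * p_success(1, 12, 8, mod)))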

This fact is neither good nor bad, but it has profound implications for the game designer -- a 2d6 system cannot handle a lot of modifiers, nor can it handle modifiers much larger than +1, unless the designer is willing for virtually automatic success or failure to characterize the game.

Some have said, "well, the referee can just create some negative modifiers to offset high bonuses".

Even stipulating that this is so, there are 2 problems with this idea.

The first is that a good referee can make up for a lot of flaws in a game design, but this shouldn't excuse lousy game design. Why should the referee be *forced* to do so?

The second problem is statistical. If you're gonna apply negative modifiers, you'd better be very careful, because a very modest negative net modifier will drop the success chance to very low levels. In the 2d6 system, a -2 net modifier reduces the chance of success from 41% to 17%. A -3 modifier drops it to 8%.

What this means is that the referee may find it impossible to challenge one character because to do so will make a moderately less capable character effectively useless.

The reason is that the SAME modifier can have profoundly different effects on characters that are only a few points apart.

Example -- you have two players both with assault rifles. Both have Rifle-2, but A has a DEX of 7, while B has a DEX of 8. A gets a +2 bonus to hit, while B gets a +4 to hit. In normal combat, A hits most of the time (72%), B hits virtually all of the time (92%).

You tire of this, and decide to create some drama in combat. You decide that they'll face someone who's gonna be tough to kill (though certainly not invincible). So, you have the next combat occur in a driving hailstorm at dusk. You decide that you'd like to reduce B to less than 50% chance of hitting, so you apply a -4 modifier.

B now hits 41% of the time. But poor A now only hits 17% of the time.

See the problem? The same penalty can have grossly disproportionate effects on different players that are relatively close in ability.

With a modest gap in proficiency, A is essentially rendered ineffective while B is still effective. Should a 2-point difference be *that* significant?

In a 2d6 system, it is.
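
For the record, the arithmetic behind the A/B example can be checked with a small 2d6 sketch (Python; hit_chance is just an illustrative helper, target 8+ as above).

def hit_chance(mod, target=8):
    # 2d6: count the 36 outcomes that reach the target after the modifier.
    return sum(1 for a in range(1, 7) for b in range(1, 7) if a + b + mod >= target) / 36

for name, dm in (("A (Rifle-2, DEX 7: +2 total)", 2), ("B (Rifle-2, DEX 8: +4 total)", 4)):
    print(name,
          "clear conditions: %.0f%%," % (100 * hit_chance(dm)),
          "with the -4 penalty: %.0f%%" % (100 * hit_chance(dm - 4)))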

And the numbers don't get a whole lot better in a 3d6 system. That's why GURPS has to have defense rolls -- it's a trivial task to get a skill of 12 (74%) or 13 (84%) and hit most of the time. Of course, the same is true of defense rolls, which explains the clumsy rule that the defender must make his defense roll by more than the attacker makes his attack roll.

It works, after a fashion. But a better solution would have been to design a better system. By the way, GURPS works well with a d20 instead of 3d6. All of a sudden, high skill levels actually matter.
 
Originally posted by tbeard1999:
The most noticeable consequence in CT is that it is easy for almost *any* character to hit nearly all the time, if he chooses automatic weapons.
That's not because of the nature of the 2d6 curve. It's because the base difficulty of shooting people is too low.

2. This means that applicable modifiers must be extremely limited both in quantity and in size, unless you happen to like nearly automatic success or nearly automatic failure being common in games. I don't, by the way.
Tasks come in a variety of difficulties. For any given character, some tasks will be nearly automatic, some tasks will result in nearly automatic failure -- the difference is which tasks those are.
 
Originally posted by Anthony:
Originally posted by tbeard1999:
The most noticeable consequence in CT is that it is easy for almost *any* character to hit nearly all the time, if he chooses automatic weapons.
That's not because of the nature of the 2d6 curve. It's because the base difficulty of shooting people is too low.


Did you mean "it's too easy to shoot people?"


If so, I disagree. A 41% base chance of hitting doesn't sound unreasonable to me.

I think that the reason it's easy to hit most of the time is that the positive modifiers are too large and too easy to get. For instance, the CT charts have a +4 shift between firing a single shot and a 4-shot burst. Isolating just this single factor will raise the to-hit chance from 41% to effectively automatic (92%). And remember, a hit also defeats armor.

And there isn't much room in the 2d6 curve to make shooting much harder. Reducing the to hit by only 1 will drop you to a 28% chance of success. Even a modest negative modifier will make you miss almost all the time.

That fact is underappreciated, I think. If you shift the success number upwards, even modest penalties will make it effectively impossible to succeed.



2. This means that applicable modifiers must be extremely limited both in quantity and in size, unless you happen to like nearly automatic success or nearly automatic failure being common in games. I don't, by the way.

Tasks come in a variety of difficulties. For any given character, some tasks will be nearly automatic, some tasks will result in nearly automatic failure -- the difference is which tasks those are.
I'm sorry, but I fail to see your point, at least as it applies to this discussion. Of course some characters will be more skilled than others, and of course some tasks will be harder. My objection is not to these ubiquitous facts.

Rather, I'm merely noting that a 2d6 system (or a 3d6 system) is far more granular than its proponents seem to think. And such systems are easy to break if the game designer creates a modifier regime that makes large net modifiers relatively common.
 
Tbeard, you know, with my simple CT combat system, I've basically done two things:

-1- I've removed the DM for armor from the to-hit throw. This eliminates many of the positive modifiers CT characters may have to hit (such as when attacking an unarmored target).

Those armor DMs now modify the damage throw.

-2- I've added a DM based on the target's speed: +0 DM for No Movement; -1 DM for a Walking Target; -2 DM for a Running Target.

These Speed penalties can be combined with Evasion. (And this makes up for a lot of the benefit we had on the Armor table for "negative" DMs to penetrate many types of armor.)




So what's the net result?

I've made it harder to hit by removing a type of beneficial DM and adding a type of penalty DM.

Doesn't that suit your quest for a non-broken 2D6 system?





A character (DEX-7, Skill-2) uses an AutoPistol, firing it at an unarmored target running away from the attacker at Short Range.

In vanilla CT: 2D +1 No Armor; +2 Short Range; +2 Skill.

Hits on a 3+, with Damage of 3D.




Under my system: 2D +2 Short Range; +2 Skill; -2 Running Target.

Hits on a 6+, with Damage of 3D +1.




Doesn't that already do what you're trying to accomplish?
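
For comparison, the probability shift in that worked example comes out like this on 2d6 (a minimal Python sketch; the net DMs are simply folded into the effective target numbers).

def p_2d6(target):
    # Chance of rolling `target` or better on 2D.
    return sum(1 for a in range(1, 7) for b in range(1, 7) if a + b >= target) / 36

print("Vanilla CT, hits on 3+: %.0f%%" % (100 * p_2d6(3)))
print("Revised,    hits on 6+: %.0f%%" % (100 * p_2d6(6)))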
 
Originally posted by Supplement Four:
Originally posted by TheEngineer:
The max skill level rule is still valid.
Getting older myself, I even have no problem with playing "saturated" characters, which have to drop one skill level in order to pick up a new one.
The problem I have with this is that it's strictly a game mechanic. It has no "logic" behind it, because in the real world people tend to learn more and more things as their lives go on. They don't reach a certain level and stop... especially if they're 26 years old and not 86 and suffering from Alzheimer's.

Hm, I slightly disagree.

Well, I agree that a 26-year-old still has potential.
But everyone has a finite mental capacity, and sadly it's true that learning ability declines past the 40s. And you start to forget things. I'm 38 now and... what was I talking about?

In fact, skills adapt to the actual circumstances of life.
So I would not say "you reach a certain level and stop", but rather "you reach a lifetime's selection of skills".

In this sense I consider the Traveller skill system far more realistic than any of those ever-advancing skill-database concepts.

regards,

TE
 
Originally posted by tbeard1999:

Did you mean "it's too easy to shoot people?"

If so, I disagree. A 41% base chance of hitting doesn't sound unreasonable to me.
Real-world hit statistics for poorly trained shooters at short ranges are under 10%. Requiring a 10-12 to hit is perfectly realistic.

That said, certain modifiers (such as autofire) may be excessively large. The typical goal of burst fire is a doubling in hit probability under poor conditions, which is really only a +1 or +2.
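
To put rough numbers on that, here's a minimal 2d6 sketch in Python (the "poor conditions" target numbers of 10+ to 12+ are assumed purely for illustration).

def p_2d6(target):
    # Chance of rolling `target` or better on 2d6.
    return sum(1 for a in range(1, 7) for b in range(1, 7) if a + b >= target) / 36

for target in (10, 11, 12):   # assumed "poor conditions" targets, for illustration
    print("%d+: base %.0f%%, with +1: %.0f%%, with +2: %.0f%%"
          % (target, 100 * p_2d6(target), 100 * p_2d6(target - 1), 100 * p_2d6(target - 2)))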

That fact is under appreciated, I think. If you shift the success number upwards, even modest penalties will make it effectively impossible to succeed.
I have no problem with unskilled people being incompetent.

Tasks come in a variety of difficulties. For any given character, some tasks will be nearly automatic, some tasks will result in nearly automatic failure -- the difference is which tasks those are.
I'm sorry, but I fail to see your point, at least as it applies to this discussion. Of course some characters will be more skilled than others, and of course some tasks will be harder. My objection is not to these ubiquitous facts.

Rather, I'm merely noting that a 2d6 system (or a 3d6 system) is far more granular than its proponents seem to think. And such systems are easy to break if the game designer creates a modifier regime that makes large net modifiers relatively common.
A 2d6 or 3d6 system is quite granular, but my point was that there's nothing wrong with that, because human skill levels vary by a lot.
 