Ati says
Now this is just a weird thought that occurred to me, but I think it does have some profound philosophical implications. Is there an optimal level of radical thinking in science? If we define radical thinking as thinking that challenges established logic and precepts, then we can see that too much or too little are both bad things. If there is too little and people just go along with what has been said before, it leads to things like the stagnation that occurred just before the discovery of quantum particles. If there is too much and people reject valid information that has been gained before (as with the crazier bits of the alternative medicine movement), it leads to a stream of negative progress. It seems apparent that there is some middle ground, some optimal level that would lead to maximum scientific progress. So, my thought can be rephrased as this: Is there an optimal level of precept-challenging thought that can go on in the scientific community? Can we find it? What effect would it have on the pace of progress if we did? Just something to think about.
Total Topic Karma: 24
Nadeem says
IMHO, it would have little effect on the pace of progress, because finding it wouldn't help people to practice radical thought. So things would go on exactly the same way as before, with the same proportion of people thinking radical things. On a slightly related note, I've been wondering about the role of simplicity and elegance in science and math in general. This was mostly catalyzed by my having to read a bunch of CS papers over the last couple of weeks. I've noticed that a lot of the really good ideas, ranging from the ones that cause paradigm shifts and create new fields to the ones that merely open up a new avenue in some discipline that will remain mostly unknown to the rest of the world forever, are astonishingly simple and direct. I don't know if this is some kind of hindsight effect, or whether those ideas really are simple and were somehow missed before. And naturally, I'm trying to figure out how to detect such ideas, or learn how to come up with them myself.
- 09 February, 2007
Ati says
I've noticed that too. The best ideas do tend to be the simple ones. To paraphrase Einstein: 'If you cannot explain your theory easily to a five-year-old child, it is worthless.' I think it is because the ideas that we tend to consider the 'good ideas' (the ones that are uncomplicated to implement, and are the most useful) are the simple ones, because complicating factors make them less useful and more difficult to implement. Thus, by definition, the simplest ideas tend to be the best.
- 09 February, 2007
Nadeem says
Ironically, I have to present a paper to my Machine Learning class next week, the major premise of which is the opposite - the author complains that the principle of Occam's Razor is being terribly misused in machine learning and knowledge-discovery systems, and produces theoretical arguments and empirical evidence supporting his point pretty convincingly.
- 09 February, 2007
Ati says
Well, that may be true - Occam's razor works because, in terms of predicting the behavior of a system (in this case, the universe at large), the simplest explanations do tend to be best. If you're BUILDING something instead of merely predicting the behavior of something that already exists, though, that's a whole different story. In that case, complexity is often beneficial, and there is no reason that a simpler system will do the job any better than a complex one.
- 09 February, 2007
3daddict says
I disagree; reducing complexity lessens the potential for something going wrong in the chain of events. The shortest distance between two points lessens the number of variables that can go awry and increases consistency. But we can't always find that route first.
- 11 February, 2007
Constantine says
I think some radical thought is essential to keep things moving; even if your theory proves wrong, it may provide insight into another field or give you an idea for a further line of inquiry to follow. In essence, I vote in favor of curiosity above all things.
- 12 February, 2007
Ati says
I agree, but a level of conservatism is required, or else you'll never make any progress, because you'll constantly be re-testing things that have been proven before.
- 12 February, 2007
p0ss says
Ati, I think every scientist needs a little of both. There should not be some minority group of scientists out there spouting nutty ideas all the time, while the majority sit around in lab coats drinking tea. Every scientist must be conservative enough to accept proven theories and evidence as they stand, but radical enough to extrapolate unusual ideas from the evidence provided and challenge theories when required. As for the optimal level, it is merely a matter of remaining constantly vigilant; one must be constantly re-examining one's own beliefs. Nothing we know can be proved to be objectively true, so we must be wary of the trap of complacency and surety.
- 12 February, 2007
Ati says
Very true. Keep working, but always maintain a level of self-doubt.
- 12 February, 2007
Rathmaster says
*cough* Yeah Ati, and I know how good you are at THAT... I'm personally not a scientist, more of a gamer. I don't really HAVE goals. Except getting laid.
- 13 February, 2007
p0ss says
Ah, so you're not a "gamer" so much as a "playa".
- 13 February, 2007
RyeGye24 says
You can always find the answer here
- 14 February, 2007
Rathmaster says
Me? A player? The reason why getting laid is one of my goals is the same reason making a popular MMO is one of Ati's goals: it's not very likely to happen any time soon, but we HOPE for it with all our might. A more short-term goal is to watch every Full Metal Alchemist episode. While getting laid.
- 15 February, 2007
Ati says
My long-term goal is to become a god (not as implausible as it sounds). My short-term goal is to get a decent VR system up and running, which will help with the former.
- 15 February, 2007
Troll says
Radical thought is the tool of the devil.
- 16 February, 2007
Ati says
Oh god, not another one... Admin, could you please blank this one too?
- 16 February, 2007
Nadeem says
"My long term goal is to become a god (not as implausible as it sounds)." Wow, that's precisely what my long-term goal is too. Looks like I'll have company in the post-deification era.
- 16 February, 2007
Ati says
Hmmm... Tell you what: I'll meet you for coffee on the tip of Olympus Mons at 12:00 noon on February 16, 4007. Got anything planned?
- 16 February, 2007
Rathmaster says
Dude, my balls ice up if I go there... I have to chill in hell.
- 16 February, 2007
Nadeem says
Nah, nothing planned for that particular day, at least. Assuming nothing comes up, I'll be there.
- 16 February, 2007
Ati says
Also, just for the heck of it, shall we agree to apply a temporary censor to the previous 2000 years' worth of memories and revert to an orthohuman neuroform for the occasion? That way, it'll seem to us as if we've been suddenly teleported into the future. This should be interesting.
- 16 February, 2007
Nadeem says
Okay, but I suggest leaving a hidden overself in place to make sure we don't do anything stupid.
- 16 February, 2007
Ati says
Well, obviously we'd want to leave a secondary copy of us Primary around, but we'll desynchronise the version of ourselves having coffee for the duration. Now it's simply a matter of remembering this appointment for 2000 years. And the whole becoming-a-god thing.
- 16 February, 2007
Nadeem says
Well, we have two millennia. If I'm still around, and I'm not a rough approximation to a god, I'd be ashamed of myself.
- 16 February, 2007
Ati says
As would I. I figure if I can make another 200 years or so, I've got it made. Incidentally, I'm entirely serious about this: if I'm alive at the time and able, I'll be there.
- 16 February, 2007
Nadeem says
Yeah, same here. What's the point of being a god if you can't do all these arbitrary things anyway?
- 16 February, 2007
Ati says
I quite agree. Actually, if you look at the minimum bar for godhood, you'll probably find yourself looking at the Greek gods, who actually aren't all that powerful. I mean, immortality is basically guaranteed once uploading occurs. They aren't even impervious to injury, for crying out loud, which you could very well be if you replaced the skin of your avatars with spider silk or Kevlar. They also have the power to inspire mortals with courage or cowardice, which could be done through manipulation of their state vectors. The only other powers they seem to have after that are the ability to shoot lightning and fire, which can be accomplished with some simple nanotech. My chances at godhood are looking up.
- 17 February, 2007
Nadeem says
Ever read Ilium and Olympos by Dan Simmons? The Greek gods turn up in a posthuman setting, and you find out just how they inspired the Greek heroes - with nanotech and force fields.
- 17 February, 2007
RyeGye24 says
Ya know, another way to become a god is by being a Mormon.
- 17 February, 2007
Ati says
Huh. That's interesting, Rye. Nadeem, no, I haven't read it, but it sounds interesting. I actually had a similar idea myself a few years ago of trying to write a story in which the Greek gods used merely modern technology. I worked out a rating system for organisms earlier today:

Subhuman (IQ greater than 0 and less than 50, with exception to idiot savants)
Human (IQ greater than 50 and lower than 200)
Weakly superhuman (IQ greater than 200 and lower than 300)
Superhuman (IQ greater than 300 and less than 500)
Weakly godlike (total mental faculties equivalent to greater than 200 average human minds and less than 1000; also implies limited access to nanotechnology)
Godlike (total mental faculties greater than 1000 average human minds and less than 5000; also implies unrestricted access to nanotechnology)
Transcendent (anything over 5000)

Since my IQ is 130 and rising steadily, I'm on the high end of the human scale, and if a couple of my ideas work out I might make weakly superhuman in a decade. What's your opinion on this rating scale?
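Just for fun, the scale described in the post is simple enough to write down as a classifier; a minimal sketch, assuming human-range tiers are keyed on IQ and godlike-range tiers on mind-equivalents (the function and argument names are mine, not the post's):

```python
def rating(iq=None, mind_equivalents=None):
    """Classify an organism on the post's scale.

    Human-range tiers key on IQ; godlike tiers key on how many
    average human minds the entity's total faculties equal.
    """
    if mind_equivalents is not None:
        if mind_equivalents > 5000:
            return "Transcendent"
        if mind_equivalents > 1000:
            return "Godlike"            # unrestricted nanotech implied
        if mind_equivalents > 200:
            return "Weakly godlike"     # limited nanotech implied
    if iq is not None:
        if iq > 300:
            return "Superhuman"
        if iq > 200:
            return "Weakly superhuman"
        if iq > 50:
            return "Human"
        if iq > 0:
            return "Subhuman"           # idiot savants excepted per the post
    return "Unrated"

print(rating(iq=130))                   # the post's self-assessment
print(rating(mind_equivalents=1500))
```

By the post's own thresholds, an IQ of 130 still lands squarely in the Human tier.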
- 17 February, 2007
Nadeem says
Just wondering how your IQ is rising. Usually it decreases with age, given that you have to divide by your physical age. I'm not a big fan of IQ as a measure of intelligence. Trying to reduce a multifaceted ability like intelligence to a single number is an extreme oversimplification. Even as a heuristic, it isn't all that useful, except when you're distinguishing between people with an extremely large IQ difference.
- 17 February, 2007
Ati says
True, but most intelligence rating systems have that flaw. I might have to work out differing criteria, but this'll do for now. As for my IQ rising, I am simply assuming current trends will continue: a year and a half ago my IQ was 105. Six months ago, my IQ was 123. Today, my IQ is 131. Which means that my ability to process information is increasing faster than the division by age. I suppose there really is no way to compress all of the qualities of intelligence into a single number; a true intelligence measure would have to be a lengthy report.
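The trend being extrapolated here (105 eighteen months ago, 123 six months ago, 131 today) works out to roughly 1.5 points per month under a least-squares fit; a throwaway sketch, with the test dates approximated in months:

```python
# Scores from the post: 18 months ago, 6 months ago, and today
months = [-18, -6, 0]
scores = [105, 123, 131]

# Least-squares slope, computed by hand (no external libraries)
n = len(months)
mean_x = sum(months) / n
mean_y = sum(scores) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, scores)) / \
        sum((x - mean_x) ** 2 for x in months)

print(f"~{slope:.1f} IQ points per month")
```

Whether three noisy test scores support any extrapolation at all is, of course, another question.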
- 17 February, 2007
p0ss says
"I mean, immortality is basically guaranteed once uploading occurs" *cheekily tugs on Ati's power cord* |
- 18 February, 2007
p0ss says
I was under the impression that intelligence quotients were limited to 200, with 100 being the average. If any being became sufficiently intelligent to be pushing the 200 barrier, it would simply drive the requirement for the average up. I could be wrong; it has happened before.
- 18 February, 2007
Ati says
P0ss, I may have heard something about that; if that is the case, then I may have to re-work the criteria. As for tugging on my power cord, I have taken precautions in designing my neuroform to prevent exactly this sort of thing from happening. The way I've set up my hypothetical neuroform is on four levels:

On the top level there is me Primary, who maintains my original personality and all memories sustained by all instances of myself, but with processing capacity expanded 100-fold. Full privileges.

On the level beneath that there are the Specials, where I keep all of my modified and exotic neuroforms. Their processing space is multiplied around ten-fold, and their primary job is to guide and instruct the lesser mes. Enhanced privileges.

Then there are the Standards, which are my original personality and processing power, but with all memories of all instances of myself. Standard privileges.

Then there are the Partials, who come in three classes. First class maintains continuity of identity. Second class is self-aware, but does not maintain continuity of identity. Third class is not self-aware. Limited privileges.

Each of these versions of myself is created, then goes about its business on the local computer system until it is done, then resynchronizes with me Primary and is deleted. The entire system is decentralized and stored on a variety of computer systems. So, you could disconnect one of the servers, but it would require a real effort of will to track down every computer I have a portion of myself located on and destroy it.
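The four-level scheme described above is essentially a tiered privilege hierarchy, and could be sketched as a data structure; a minimal sketch using the post's own multipliers (all names are mine, and the Partials' three sub-classes are collapsed into one entry):

```python
from dataclasses import dataclass
from enum import Enum

class Privilege(Enum):
    FULL = 4       # Primary
    ENHANCED = 3   # Specials
    STANDARD = 2   # Standards
    LIMITED = 1    # Partials

@dataclass
class Instance:
    name: str
    privilege: Privilege
    processing_multiplier: float   # relative to the original mind
    all_memories: bool             # carries memories of all instances?

# The four levels described in the post
HIERARCHY = [
    Instance("Primary",  Privilege.FULL,     100.0, True),
    Instance("Special",  Privilege.ENHANCED,  10.0, True),
    Instance("Standard", Privilege.STANDARD,   1.0, True),
    Instance("Partial",  Privilege.LIMITED,    1.0, False),
]

def may_instruct(a: Instance, b: Instance) -> bool:
    """Higher-privilege instances guide and instruct lower ones."""
    return a.privilege.value > b.privilege.value
```

The enum ordering captures the one rule the post actually states: guidance flows strictly downward, from Primary through Specials to the lesser instances.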
- 18 February, 2007
p0ss says
Servers? Shit, if you're a nanobot god, why not just exist as pure energy?
- 18 February, 2007
Nadeem says
I prefer the style of the Solid State Entity from David Zindell's Requiem for Homo Sapiens trilogy. Millions of planetary-sized computational devices spread out over thousands of cubic light-years, communicating in unknown ways. Even the underlying topological structure of space is fundamentally different. Imagine trying to destroy that.
- 18 February, 2007
Ati says
Well, energy-based computational constructs are notoriously unstable. All it takes is one small introduction of energy to overbalance the system and destroy the data stored on it. I think that large computers are also going to be unnecessary for some time; given the rapid exponential increase of quantum computers, and the considerably slower increase in human processing needs, I find it likely that it will be a very long time before any major engineering of planet-mass computational bodies will be required. And when I was talking about servers, I was referring to the relatively near future.
- 18 February, 2007