List of evaders

The following document is a summary of some of the people who quit discussion on FI and associated lists, such as the BoI list. Where information about a person’s activities off the list was found, a summary of that activity is given.

Ron Garret

ron@flownet.com

Interest: David Deutsch

Garret had a blog post discussing David Deutsch’s book “The Fabric of Reality” and criticising chapter 2 of the book. https://blog.rongarret.info/2015/03/why-some-assumptions-are-better-than.html

Elliot offered to criticise Garret’s post. Garret posted on FI between 13 and 15 May 2019. During the discussion he posted misquotes. When posters criticised him for misquoting, he stopped posting. A quote from Garret’s last post:

https://groups.google.com/d/msg/fallible-ideas/r9KDPJUwg88/QeMz7sJGAAAJ

>> You have misquoted people and said you’re careless. These are substantive problems. If you think they’re not substantive problems that in itself is a substantive disagreement. And for reasons pointed out above, these problems would make discussion of other problems more difficult. So fixing your carelessness and misquoting is more important int eh current context than your views on other topics.

>

> I didn’t misquote *people*, I misquoted *myself*.  And the “misquote” was substituting the word “dispute” for “deny” (or maybe it was the other way around, I don’t recall, and I don’t feel like looking it up).  If you think the difference between “dispute” and “deny” is substantive, well, we’ll just have to agree to disagree about that.  I intended them to be synonyms.

>

>> We’re having a substantive disagreement about the importance of accurate quoting.

>

> No, we are having a stupid disagreement about whether or not “deny” and “dispute” mean the same thing.  I neither deny nor dispute that accurate quoting is important in general.  I deny and dispute that it matters in this particular instance.

>

> You know what?  This is a waste of time.  I’m done.

Fred Welf

fwelfar@gmail.com

Interest: David Deutsch

Welf posted to the Beginning of Infinity group about liberalism between 13 and 15 June 2017, claiming that liberalism led to bad policies. His posts were criticised and he stopped posting.

https://groups.google.com/d/msg/beginning-of-infinity/jCRJad0SNWY/7a_q0kPdAAAJ

A quote from Welf’s last post:

> These are some of the nonsense consequences which were directly stated in my initial post. I wonder if you decided they were not stated as if you had read carefully or were blinded by your own overreaction to the first few sentences!!

Welf has posted some material about banks on the internet without discussing it with FI or the BoI group, see

https://tc.academia.edu/FredWelf

David Winslow

Davidwin@TDS.net

Interest: Objectivism

David Winslow came to the Fallible Ideas list from the Harry Binswanger list.

A quote from one of Winslow’s posts:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/19334

>>The combination here of there being a purpose to believing

>>something exists (explains sightings of apple trees, explains how apples are

>>produced, etc. and also no criticism of claiming it exists is adequate to say

>>it exists.

>

> Your consensus methodology has little to do with either a scientific or philosophical proof.

>

>> Existence is covered in more detail in DD’s books. http://beginningofinfinity.com/books

>

> I have no interest in hearsay.

Blue Yogin

blue.yogin@gmail.com

Interest: behavioural genetics

Blue Yogin posted to the FI list in December 2013 and January 2014. He tried to promote human behavioural genetics. A quote from one of his posts:

https://groups.google.com/d/msg/fallible-ideas/8_RLCxNQ_LI/EwgBx3JIcacJ

>>> Hi. I am the original asker of this question, and I’m new to the list. Happy to be here. Some of the context was lost here – we had been talking about the heritability upper and lower bounds for some common phenotypes, chief among them intelligence. (Or, if you prefer, “the ability to do well on IQ tests”). Whenever I talk about this I try to switch to sports analogies as quickly as possible. I think we have better intuitions about athletic ability than about mental ability, and they have the same basic statistical properties, so it’s easy to switch between them.

>>

>> So your approach to the subject is to stop considering it, and start arguing by analogy, as quickly as possible?

>

> Yes.

>

>>> To make this question harder and to avoid sneaking away from the pain I’m trying to inflict, allow me to add another proviso.  Suppose that any good idea in football can be quickly copied by other teams.

>>

>> How will supposing something vague, and seemingly false, get us anywhere?

>>

>>> Thus, though individual coaches might have some good ideas about how to train better or play better, whenever a good idea like that is shown to be successful, it quickly sweeps the entire league, and the game returns to an equilibrium state in which every team employs the best training and playing strategies, so that the only difference between teams are physical differences that can’t be easily copied.

>>

>> So you only want to approach the subject via *unrealistic* analogy?

>

> It’s not unrealistic. Strategies in football are in actual fact routinely copied whenever they work. Major league teams employ scouts whose full-time job is to figure out what training routines and play strategies the other teams are practicing. Trainers and coaches routinely switch from team to team and cross-pollinate their institutional knowledge. Whenever a team loses to a significantly new strategy, everyone in the entire league studies that play on their iPads in the week after the game and either adopts or develops a counter strategy.

One day he just stopped replying to posts.

Tom Robinson

tmt637@googlemail.com

Interest: TCS

Tom Robinson vaguely hung around the TCS/FI scene for years.

His last post in March 2015 rejected FI without offering any way to make progress on disagreements:

https://groups.google.com/d/msg/beginning-of-infinity/fRVLfpN8jDc/IgNgCaWwpW4J

>> I watched the first 10 seconds and it looks super annoying to watch and really trendy, and screams unseriousness. Which fits with the 2 million views.

>>  […]

>> If you’re guessing why, instead of him knowing something more than an assertion, I’m not impressed.

>>  […]

>> it doesn’t matter if he’s the 10th best person in the world, it wouldn’t be good enough.

>

> When I wrote that post I was interested in memes and writing mainly for my own benefit. By comparison, issues about how to make videos, who is impressed and who is the best at philosophy are boring and off-topic. They don’t lie at The Beginning of Infinity. To paraphrase Ayn Rand, the proper business of man is the conquest of nature, not the conquest of other men.

Richard Crawley

crawleyfiesta@gmail.com

Richard Crawley posted from late May 2018 to September 2018. One of his friends told him about the list because Crawley was interested in addiction. He stopped replying to messages without saying why. A quote from his last message:

https://groups.google.com/d/msg/fallible-ideas/2HlwpBFm4ME/TMmNRKVkCQAJ

>> Minimum wage laws are a price control on the price of labor. Any comment on those?

>

> I have no problem with a minimum wage. I’m not sure I’s call it the “price of labour” in that way as that sounds like work is a commodity to be sold, like a tin of beans, whereas in fact the operation is the other way round.

>

> The minimum wage was introduced to ensure that workers didn’t get ripped off. Unfortunately, the labour market isn’t such that if you don’t like one employer you can just go and get another; despite what the governments tell us, there are not swathes of companies desparate for employees.

King of Kings

kingofkings_woodz@hotmail.com

This person posted two messages and didn’t reply to criticisms at all.

A representative quote from one of his messages:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/topics/1655

> We become the caretakers and defenders of all life as we know it thus far. We knw life is always at risk on the micro scale here on earth one species to the next. Whether they be plants, animal, or cell life. All life.

Kevin Vollmer

work.kvollmer@gmail.com

Kevin posted twice in September 2013, asking about how to deal with criticism. A representative quote:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/topics/1340

>>> When you come up with a new idea, the first thing you would do is consider if this actually solves the problem you have. If your idea doesn’t solve the problem, you can always think some more about why it doesn’t and make alterations to it until it does.

>>>

>>> Suppose you think the idea might solve the problem. Do you then look for external criticisms to the idea? Or do you come up with your own criticisms of the idea to see if it does indeed work?

>>

>> When you’re consciously thinking about an idea, you’ve already criticized it and rejected many variants. The idea couldn’t be any good otherwise. Criticism is necessary to get halfway decent ideas.

>>

>> People would typically call this “unconscious criticism” or maybe “subconscious criticism”. I don’t think most thinking on those topics is very good. Maybe a better concept would be: criticism that you weren’t paying active attention to. I don’t think that’s perfect either but it has advantages.

>>

>> So in this context, the issue is more like: there is a constant stream of internal criticism going on and at what point do you try to add in some external criticism? And at what point do you pay more active “conscious” attention to the internal criticism?

>>

>> There isn’t a one size fits all non-contextual answer to that. But as a rule of thumb, if you’re wondering whether to get an idea some external criticism, then the answer is that you should.

>

> What if I want external criticisms a lot? What if I keep thinking my ideas aren’t very good, and I should keep looking for more available  criticisms.

anurnimuss

anurnimuss@yahoo.com

anurnimuss posted a few messages to the FI group in February 2014 and then stopped. A short sample of his writing:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/topics/2253

> Is there a rational way of deciding whether someone should be allowed to acquire control of a dangerous object that belongs to a particular class of destructive capability?

>

> I think that as a weapon or object becomes more dangerous to more people, we should become proportionately more critical of who has signaled an interest in taking control of it. This implies that we should scrutinize would-be knife owners less than gun owners, and gun owners less than bomb owners, and bomb owners less than missile owners, etc.

Matjaž Leonardis

Interest: TCS

sidranoel.zajtam@gmail.com

Leonardis posted on the BoI group from 2011 to 2012. He stopped posting with no explanation. A short sample of his writing:

https://groups.google.com/d/msg/beginning-of-infinity/cpi1SAqJEO0/jPSWqTeUmFgJ

>> Universality is when a particular solution solves all of the problems

>> in a given domain. So, for example, the Arabic numeral system is

>> universal for doing addition, subtraction and multiplication with

>> positive integers. By contrast, tally marks are useless for doing

>> those things except for very small numbers.

>

> But you can do multiplication, addition and subtraction between positive integers with tally marks.

>

> When you add you concatenate the two strings together, when you subtract you shorten the first string by the length of the second, and when you multiply you concatenate one copy of the first string for each tally mark in the second.

>

> There is no reason why you can’t do this (in principle) with arbitrary large integers.

>

> It is true that the Arabic system is way more efficient but even it becomes useless for sufficiently large integers.

>

> So I’m not 100% clear what exactly is the domain where the Arabic system is universal and the tally mark system isn’t.
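The tally-mark arithmetic Leonardis describes is easy to make concrete. A minimal Python sketch (the function names are mine, not from the discussion):

```python
# Tally-mark arithmetic on positive integers, as described in the quote:
# a number is a string of "|" marks.

def to_tally(n):
    """Encode a positive integer as a string of tally marks."""
    return "|" * n

def add(a, b):
    # Addition: concatenate the two strings.
    return a + b

def sub(a, b):
    # Subtraction: shorten the first string by the length of the second
    # (assumes a >= b, so the result stays a valid tally string).
    return a[: len(a) - len(b)]

def mul(a, b):
    # Multiplication: one copy of the first string per mark in the second.
    return a * len(b)

assert add(to_tally(3), to_tally(4)) == to_tally(7)
assert sub(to_tally(9), to_tally(4)) == to_tally(5)
assert mul(to_tally(3), to_tally(4)) == to_tally(12)
```

In principle this works for arbitrarily large integers, as the quote says; in practice the strings grow linearly with the number, which is exactly the efficiency gap the Arabic system closes.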

Leonardis posts on Twitter, including a post calling a woman fascinating based solely on a single photo:

https://twitter.com/MatjazLeonardis/status/1137422078590803968

Jude Stull

aspect_of_reality@yahoo.com

Jude posted to the FI group in June and July 2013 and stopped posting without any explanation. A sample of Jude’s writing:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/355

>> 2) instead of giving out direct contact info, have some public email list, discussion group, forum, or whatever, which you monitor. you do not have to read everything there. let other people discuss it. then look over the topics that get the most attention, that your allies don’t know how to fully answer, that your allies think have merit, and so on.

>

> what are “allies”?

>

> If we are truly in the business of constructing a greater truth by openly proffering and then vetting fallible ideas, why would we need “allies”?

>

> If allies reflexively side with you in the event of a debate with a non-allied outsider, wouldn’t this sully the potential of the debate to attain truth?

zynevam

zynevam@gmail.com

zynevam posted in the FI group from November 2011 to May 2018. A short sample of his writing:

https://groups.google.com/d/msg/fallible-ideas/mCSmFn87daU/pVMu_wRRBQAJ

> 2017-05-26 22:25 GMT+01:00 Elliot Temple curi@curi.us [fallible-ideas] <fallible-ideas@yahoogroups.com>:

>> On May 26, 2017, at 1:59 PM, Zyn Evam zynevam@gmail.com [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

>>

>>> 2017-05-26 20:29 GMT+01:00 Elliot Temple curi@curi.us [fallible-ideas]

>>> <fallible-ideas@yahoogroups.com>:

>>>> On May 26, 2017, at 11:55 AM, Zyn Evam zynevam@gmail.com wrote:

>>>>

>>>>> 2017-05-25 23:54 GMT+01:00 Elliot Temple curi@curi.us wrote:

>>>>>> On May 25, 2017, at 3:45 PM, Zyn Evam zynevam@gmail.com wrote:

>>>>>>

>>>>>>> Is FI the best place to make progress in solving AI?

>>>>>>

>>>>>> FI is crucial for this, and for critical discussion generally.

>>>>>>

>>>>>> for AI to work it’ll have to be Popperian.

>>>>>

>>>>> This is what I would like to implement: popperian epistemology applied

>>>>> in the context of neural nets.

>>>>

>>>> Then my guess is you should learn way more epistemology.

>>>

>>> I guess that is so too.

>>>

>>>> Also you shouldn’t start with a preconceived notion that Popperian epistemology will apply to some existing AI programming paradigm like neural nets.

>>>

>>> Yes, that could be so. However in terms of knowledge representation I

>>> think neural nets work really well. The problem is knowledge creation,

>>> which neural nets do not really do at all. In supervised learning

>>> paradigms we feed all the knowledge to the nets, they do not create

>>> new knowledge, just good representations to do the particular tasks we

>>> want them to do. I haven’t done much in reinforement learning though.

>>> But still there we specify what we want the neural nets to learn. It

>>> is still us who specify which score the neural nets shall maximize.

>>

>> i think the really hard problem is criticism. until you figure out how to deal with criticism, you don’t know what knowledge representations are good. you need a knowledge representation that facilitates your approach to criticism.

>

> in supervised learning settings humans have to supply the information (criticism) which enables error correction. for instance in image recognition humans have to label images as belonging to different categories. in imagenet (http://image-net.org/challenges/LSVRC/2017/) for instance there are 1000 categories with 1000 examples each. a neural net starts with randomly initialized weights (analogous to synapses which will store knowledge) and receives raw pixel values (e.g. 0s and 1s) as inputs. as activation runs through its weights given an input sample it produces a random guess initially. then we have to tell the network if its guess has been correct or not (hence we need labels). based on this the network changes its weights so that next time its guess will be closer to the correct output. in the end we test it on 150,000 images it has never seen.

>

> all of this requires 0 human knowledge going into the design of the network (we initialize the weights randomly). the only point where human knowledge feeds into it is that we have to tell what is the correct output. the error correction afterward is general and automatic.

>

> the reason I mentioned knowledge representation is solved with neural nets, is that with the exact same method can be applied to any domain, such as speech recognition, text classification, odor classification, or whatever you can think of.
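The supervised-learning loop Zyn Evam describes (weights with no built-in knowledge, a guess, a human-supplied label saying whether the guess was right, an automatic weight update) can be sketched with a toy perceptron. This is only an illustration of that loop, not the image-scale neural nets from the discussion; all names are mine:

```python
# Toy supervised learning: the only human knowledge entering the system
# is the label on each example; the weight update itself is automatic.

def train(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # zero-initialized here; the quote's nets start random
    b = 0.0
    for _ in range(epochs):
        for x, label in samples:
            guess = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = label - guess  # the label supplies the "criticism"
            w[0] += lr * error * x[0]  # correct the weights toward
            w[1] += lr * error * x[1]  # the labeled answer
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Labeled examples for logical OR; the labels play the role of the
# human-supplied error correction in the quote.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
```

After training, the model reproduces the labels it was corrected toward; nothing about OR was built into the update rule itself.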

Larry Mason

mason@email.unc.edu

Larry Mason posted on the BoI list from April to May 2013. He stopped posting with no explanation. A sample of his writing:

https://groups.google.com/d/msg/beginning-of-infinity/SUUPbUWAnNw/hhhiJ9qhyioJ

>>> In my system when a luxury is bought, the money ceases to exist.  The producers of the luxury will receive some money soon (within a month?) for that benefit and may receive more later if other benefits become apparent later.

>>

>>Adding a month delay to many financial transactions would be an economic catastrophe. (Plus the uncertainty about how much you will be paid.)

>

> In my system, financial transactions are purchases of luxuries.  Hardly something that can cause an economic catastrophe.  But with physical object money financial transactions are the heart and soul of most economic catastrophes.

>

> In my system, only your luxury income is at risk.  With physical object money, how much you will be paid is not only unknown but if huge importance.   (Just ask those folks who don’t know whether they’ll have a job next month or those who have lost their jobs.)

>

> The more you write the more you make the case that physical object money economics is a horror show.

He has a website at https://nopomstuff.info/

Rafe Champion

rchamp@bigpond.net.au

Interest: Karl Popper

Rafe Champion was a critical rationalist who posted on the FI group from June 2013 to October 2014. A sample of his writing:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/222

> A problem that is identified becomes a problem that can be worked on. Taking one issue at a time and having the same struggle each time suggests that he was not getting the STRUCTURE of the ideas so that each one he mastered should have made it easier to take the next steps.

>

> An example would be in cooking where you don’t understand that the different kinds of cooking – boiling, frying, baking are all based on a similar process to get from a raw product to a cooked product by applying heat in different ways.

>

> Similarly in languages where you try to learn without any grasp of grammar that enables you to take something from one language to the next (in the same linguistic family).

>

> And the problem of getting justificationists to see that the same intractable problems turn up in each situation where they take that approach rather than the alternative.

He left the FI group without giving an explanation. Elliot Temple discussed some of his problems in this email:

https://groups.google.com/d/msg/fallible-ideas/WPMaB2kAr8g/EXc_51ppQWQJ

His website is here

http://www.the-rathouse.com/

Damián Gil

damiangil@gmail.com

Gil was on the BoI list from July to August 2017, arguing against Yes or No Philosophy:

https://yesornophilosophy.com/

A sample of his writing:

https://groups.google.com/d/msg/beginning-of-infinity/qxmi02BkTiU/W5gMfAq9AQAJ

>> What do you care about “more degree of confidence” or “the more confidence you’ll have that the man is cheating and the die is not fair”? (These quotes are from the post included below.) Is it an imprecise statement about what bets you would and wouldn’t take?

>

> Remember I’m not a native english speaker. I don’t know if “what do you care about” is a slang phrase or something. Talking literally, what I care about is no one’s business. The relevant thing is that the degree of confidence can increase or decrease, not if I care much about it or not.

>

> The confidence someone has in a statement can be very imprecise, like in the case of a common man treating a difficult problem; or very precise, like in the case of a bayesian statistician treating a very simple problem, like the hypothetical die. A statistician can state his degree of confidence in a very precise, numerical way. For example, he can assume initially that the die is fair, and the proportion of sixes must be 1/6. Each time the die is rolled, the statistician can give you the exact posterior odds ratio. Say he calculates it after quite a lot of sixes and the result is 5:1. That means that, given his current knowledge (I insist that probability is subjective, it depends of the state of knowledge of the observer, so different observers can precisely calculate different probabilities for the same event) cheating is exactly five times more probable than innocence. But all of this is irrelevant. The degree of confidence in a statement can be vague or precise, but the point is that it _exists_. It’s not illogical to talk about it.
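The posterior odds calculation Gil sketches can be written out explicitly. A minimal version under assumed numbers (a fair die with P(six) = 1/6 against a hypothetical loaded die with P(six) = 1/2, and even prior odds; the 5:1 figure in the quote is just illustrative):

```python
# Bayesian posterior odds of "loaded die" vs "fair die" after a series
# of rolls. Each observed roll multiplies the odds by a likelihood ratio.

def posterior_odds(rolls, p_loaded=0.5, p_fair=1 / 6, prior_odds=1.0):
    """rolls: sequence of booleans, True where a six came up."""
    odds = prior_odds
    for is_six in rolls:
        if is_six:
            odds *= p_loaded / p_fair              # a six favours "loaded"
        else:
            odds *= (1 - p_loaded) / (1 - p_fair)  # a non-six favours "fair"
    return odds

# Six sixes in ten rolls shifts the odds heavily toward cheating.
odds = posterior_odds([True] * 6 + [False] * 4)
```

As the quote says, the number is exact given the observer’s assumptions: a different choice of p_loaded or prior odds yields a different, equally precise, odds ratio.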

Logan Chipkin

chipkin.logan@gmail.com

Chipkin posted a few times in March 2019 and stopped without giving any reason. A sample of his writing:

https://groups.google.com/d/msg/fallible-ideas/gWCrQqS2XBM/bPREYAC2BQAJ

>> A study from Harvard University found that, once contextual factors

>> are taken into account, no racial differences emerged in the data on

>> lethal shootings. As the author notes, “In the end, however, without

>> randomly assigning race, we have no definitive proof of

>> discrimination”.

>

> I don’t think one quote from an unnamed study, and a statement of the conclusion, without a citation, is convincing enough. It’s not presenting them with a bunch of counter evidence.

>

> So I thought the article hyped up data but then didn’t have a lot. Too much setting the stage and conclusions, and too little of what I thought would be the main course or meat of the article, IMO.

He has a twitter account where he has posted more recently https://twitter.com/chipkinlogan

Abraham Lewis

abrahamwl@gmail.com

Lewis posted on the FI list once in 2017. A sample of his writing:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/19868

>>> I was using understanding your ideas about marriage as an example and saying that in spite of the fact that it’s very important with lots of value to be had, it can still be rational and good that people don’t pursue it, because of opportunity cost.

>>

>> i’m not aware of any actual cases where the opportunity cost makes it not worth it. i’m sure you could invent some with e.g. a married couple who are both 105 years old and about to die of cancer, so there isn’t time to make progress and then benefit from that progress. but in normal situations for people with decades ahead of them, i don’t see the case that this stuff isn’t worth the cost.

>

> But that’s from the vantage point of someone who already accepts its value. Other people are surrounded by people claiming to have ideas or criticisms that will help them improve their marriage. Depending on the person, it is highly likely that most wont. The opportunity cost to pursue all of those avenues, let alone every other area in which people are claiming to have good ideas that will improve them is very high.

Bruce Nielson

brucenielson1@gmail.com

Nielson posted on the FI list from July to October 2018 and then stopped without explanation. A sample of his writing:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/28392

>>>> I don’t think that e.g. Marxism or environmentalism spread by

>>>> offering

>>>> value or by rational persuasion. They’re irrational movements which

>>>> pressure and manipulate people.

>>>

>>> Hmmm… I actually said “meaning” not “value.” I’m suggesting the Left

>>> is

>>> very good at creating meaning for people (like religion does). I’m not

>>> claiming that the meaning being created is valuable to anyone but

>>> (perhaps)

>>> the individual it created meaning for.

>>

>> Meaning is valuable to people. Even if it’s only valuable to one person,

>> that’s still offering value to that person, from his perspective.

>

> I’ve rolled this discussion back to before it went off the rails. It took me forever to even find the right spot to jump back in. I tried to respond back in September but couldn’t find the right spot to fix the conversation.

>

> Anon, I have a concern here. I’m reading you a certain way and I’m struggling to read you a different way. The problem is that it really seems to me like I said something that makes sense and that your response must somehow misunderstand it. But maybe you really and truly understood me and were appropriately responding back to me.

>

> I’m making the following claims — very limited claims:

> 1. There are people that find “Leftism” personally meaningful. And by that I mean very specifically “they get meaning internally over it in a subjective way.”

> 2. I wasn’t *insisting* that we call “meaning” the same as “valuable” since the word “valuable” has a range of meanings that may or may not include the sort of short term value that one gets out of ‘personal meaning.’

>

> Does that make sense?

>

> I’m not claiming anything else here.

>

> I screwed up the quoting really back back when we had this exchange.But let me recreate it here to illustrate my concern with your response:

>

> Bruce: So it’s not at all clear why ‘the religion of the left’ even works in the first place as a way of creating such strong meaning for people as a replacement for traditional religion. Thus this is the mystery I’m trying to solve for myself. Also, I want to know this because I’m convinced once I know how the Left is creating so much *meaning* for people, I’ll know how to counter it without destroying the parts of the Left that I appreciate and like.

Captain Buckwheat

captainbuckwheat@gmail.com

Buckwheat posted on the FI list from August 2013 to March 2014. A sample of his writing:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/1652

>>> FH:

>>>

>>>> But there were men who were impressed by the simple fact that Roark had built a place which made money for owners who didn’t want to make money; this was more convincing than abstract artistic discussions. And there was the one-tenth who understood. In the year after Monadnock Valley Roark built two private homes in Connecticut, a movie theater in Chicago, a hotel in Philadelphia.

>>>

>>> Why does AR think it’s one-tenth? Does she ever argue for that anywhere? Do you think it’s one-tenth? If not, what’s a more accurate figure?

>>

>> I don’t quite see what she meant there. One-tenth of what? I guess it could mean Roark’s ratio of actual clients to potential clients in a given year. This doesn’t make sense, though, because it means that one-tenth of people looking to build something in that year preferred to have it built by Roark. Surely more than 40 new buildings were erected that year, so that would mean Roark was turning away clients who “understood”. This isn’t mentioned in the book.

>>

>> For comparison, an optimistic guess is that 1/1000 people each year buy a book by Ayn Rand. (*) Buying a book is a lot less of a commitment than buying a house, so we can take that as an upper bound on Roark’s ratio of potential clients to actual clients. The actual figure would have been much smaller, maybe 1/10,000 or 1/100,000.

>>

>> (*) Atlas Shrugged sold 7,000,000 copies in the 56 years since it was first published in 1957, which averages out to 125,000 per year. The US population averaged around 250 million people over that time, so if we optimistically assume all the sales were in the US, that’s 1/2000 people each year. If we count all her books, the number of sales each year might double (again being very optimistic), which would work out to 1/1000.

>

> I think Ayn Rand meant 1/10 of 1/4 of those who read the article that Austen Heller wrote about Roark. Just three paragraphs before the quote mentioned above it reads:

>

> ”Howard,” Mallory said one day, some months later, “you’re famous.” “Yes,” said Roark, “I suppose so.”

> “Three-quarters of them don’t know what it’s all about, but they’ve heard the other one-quarter fighting over your name and so now they feel they must pronounce it with respect. Of the fighting quarter, four-tenths are those who hate you, three-tenths are those who feel they must express an opinion in any controversy, two-tenths are those who play safe and herald any ‘discovery,’ and one-tenth are those who understand. But they’ve all found out suddenly that there is a Howard Roark and that he’s an architect…”

Deductivist

Interest: critical rationalism

deductivist@yahoo.com

Deductivist posted on the FI list from June 2013 to January 2016 and then stopped without explanation.

A sample of his writing:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/topics/13819

>> God is a bad explanation. If you want to explain some issue X using god, then there are two possibilities. Either X is the way it is just because god says so, in which case we might as well say “shit happens”. Or X is the way it is because god has a reason Y for liking it that way. In that case, any mechanism that respects the principle Y will do just as well as god, so god is not necessary. For example, if god happens to favour the existence of genes that copy themselves in their environment, then natural selection explains the attributes of genes better than god. So god can be rejected since it is no good as an explanation. Since god can be eliminated from any worthwhile explanation, god is a bad explanation and we can do without god. So the objective truth is that god doesn’t exist.

>

> You say “for example”, but I don’t see how the example conforms to what you talked about before. Let’s consider the situation where god has a reason Y for liking X to be a certain way. Your example says that god happens to favor the existence of genes that copy themselves in their environment. So that’s the way he likes them. Cool. But where (in your example) do you give his reason for liking them to be that way?

Max Kaye

m@xk.io

Max Kaye posted on the FI list from January 2018 to January 2019. A sample of his writing:

>> when ppl ask questions, usually they barely care. so if i think about

>> it much, or put much energy into the issue, i get out of sync with

>> them. they don’t keep up.

>>

>> they do not label their questions as “barely care”, which is

>> dishonest of them.

>

> How would people know if they really care? If it’s shown by long term action, research, persistence through indirection, etc, then most people don’t really care about *anything*.

>

> I agree it’s dishonest, but it feels like honesty is a really hard skill to learn (for an adult in our society). Most ppl think it’s just “don’t lie”, but it’s clear to me (particularly after reading FH) that this is only the most superficial way to look at it.

>

> If they’re not able to be honest more generally, how could they label their questions accurately or even know if they really do care or not?

>

> My guess is the main thing they’re not able or willing to do is estimate up-front how much time, energy, and indirection they’re willing to put in/tolerate to answer a question. These are hard skills to learn! And it’s much easier to be a little dishonest (this is how it seems to them) than acknowledge they lack these skills and can’t give a good answer.

He stopped discussing without saying why.

Brett Hall

brhalluk@hotmail.com

Brett Hall posted on the FI list from June to August 2015. He posted about a variety of topics including AI. A sample of his writing:

>>>> Don’t you TEACH epistemology – so it’s your job to know it better? (That’s what your twitter says: https://twitter.com/tokteacher ).

>>>>

>>> I teach what is *called* “Theory of Knowledge”. Which *should be* epistemology. It’s actually philosophy-lite with lots of lefty relativism and other nonsense. Which you would expect: from a standard curriculum.

>>

>> So, you teach bad ideas, from a position of authority, to vulnerable students. Fuck you.

>>

> Not all schools are alike. The students know what I think of the bad ideas. Typically they come away from discussions about those ideas with better ideas. Better than they would have, if someone else was trying to present that material.

——

Richard Fine

Interest: TCA

richard.fine@gmail.com

Richard Fine posted on the FI group from July 2013 to June 2016, and he was on the TCS list before that. A sample of his writing:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/16380

>>> How do guns work?

>>

>> you press the trigger and it swings a piece of metal at the back of the bullet. the back of the bullet is gunpowder which blows up. the explosion pushes the rest of the bullet forward. the bullet is in a metal tube (barrel of gun) which controls what direction the bullet goes. (normally if you just had an explosion it’d be hard to control the direction it makes stuff go.)

>

> BTW the idea of ‘explosion’ + ‘tube to control the direction’ is also how car engines work.

>

> Instead of hitting gunpowder with metal to make the explosion, they set gasoline on fire with an electric spark; and instead of the explosion pushing a bullet, it pushes a big chunk of metal (a piston) along a tube. The piston is connected to mechanisms (a crankshaft) that turn the pushing motion into a turning motion for the wheels. Then the piston moves back along the tube to be ‘fired’ again by another explosion. When the car’s going fast, this is happening many times a second.

>

> I think it’s pretty cool that the same idea that makes guns work also makes cars work.

Lulie Tanett

Interest: TCS

luliet@gmail.com

Lulie posted on the FI list from August 2013 to January 2016, and she was on the TCS list before that. A short sample of her writing:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/3103

>>> If you have unconventional views on relationships/friendships, like

>>> that it’s good to act on mutual self-interest rather than altruism,

>>> how do you manage the clash of different expectations?

>>>

>>> Like, it’s reasonable to expect people will usually behave according

>>> to society standard. Most people don’t know of other ways of

>>> behaving.

>>>

>>> Or they think that you can either agree with them or be a bad

>>> person.

>>> So if you seem like a good person, people think that you’re just

>>> doing

>>> the conventional thing.

>>>

>>> Even if you tell them you believe in selfishness, they typically

>>> have

>>> no idea what that means (unless they’ve read Ayn Rand or similar).

>>> So

>>> they assume you just follow convention but say fancy things which

>>> don’t make a difference in practice.

>>>

>>> Do you have to just be super aware of when they might be doing

>>> things

>>> contrary to your views — like self-sacrifice, assuming obligations,

>>> etc. — and avoid or stop the problem interaction?

>>>

>>> Sounds like a lot of effort thinking about convention. But how else

>>> can you avoid being misleading to people?

>>>

>>> Even if you didn’t have a responsibility to not be misleading to

>>> people, if you’re often misleading then problems will come up.

>>> (People

>>> will be like, hey I’ve been super selfless, why aren’t you repaying

>>> me

>>> with selflessness?)

>>>

>>> The problem is that saying explicitly that you don’t believe in

>>> obligations, self-sacrifice, etc. *won’t help* because they simply

>>> won’t understand what you’re referring to without learning a lot

>>> about

>>> the subject.

>>

>> stop doing tons of conventional social signalling and most of the

>> problems go away

>>

>> if you don’t signal you are normal, people won’t expect you to act

>> normal so much

>

> What is conventional social signalling and how do you not do it?

>

> Wouldn’t acting weird make the problem worse, because then people

> won’t know what to expect of you and won’t like that? Would be accused

> of things like “hard to read” or “hard work”?

Lulie still likes to write, but she doesn’t like criticism:

https://conjecturesandrefutations.com/2019/03/16/lulie-tanett-vs-critical-rationalism/

Michael Smithson

michael.r.smithson1@gmail.com

Michael Smithson posted from May 2013 to March 2014. A sample of his writing:

> On Thu, Feb 13, 2014 at 12:33 AM, Elliot Temple <curi@curi.us> wrote:

>>

>> On Feb 12, 2014, at 9:16 PM, Michael Smithson <michael.r.smithson1@gmail.com> wrote:

>>

>>> On Wed, Feb 12, 2014 at 11:32 PM, Elliot Temple <curi@curi.us> wrote:

>>>> http://www.paulgraham.com/say.html

>>>>

>>>>> What scares me is that there are moral fashions too. They’re just as arbitrary, and just as invisible to most people. But they’re much more dangerous. Fashion is mistaken for good design; moral fashion is mistaken for good. Dressing oddly gets you laughed at. Violating moral fashions can get you fired, ostracized, imprisoned, or even killed.

>>>>

>>>> The man who wrote this is responsible for censoring me today when I criticized a moral fashion. [0]

>>>>

>>>> I refer to the psychiatry discussion here (I’m xenophanes): https://news.ycombinator.com/item?id=7227820

>>>

>>> I suppose one can only be grateful for the fact that “lynch mob” has a

>>> mostly metaphorical use these days.

>>

>> ya. that makes PG’s cowardice all the more damning, btw. sure i got in trouble, got punished, but it’s OK, i’m safe. not real physical danger. the dangers are things like feeling bad because people sad bad things about me, and having a worse reputation with bad people.

>

> You know what I think is interesting? I think there’s some number of

> people who *would* die to promote their values but won’t *live* for

> them.

> What I mean is, if say Nazis the Sequel came to rule Europe, they’d

> risk their personal safety to hide Jews, but they won’t be too

> critical with someone at a cocktail party over a mild anti-semitic

> remark.

>

> What do you think? Do some people act as if the disapproval of randoms

> is a fate worse than death? If so, why?

Becky Moon

Interest: TCS

beckyam@gmail.com

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/topics/495

Becky Moon posted to the FI group between June and July 2013, and she was on the TCS list before that. A sample of her writing:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/504

>>> David once gave me a simple counter-example about what’s wrong with induction – something along the lines of a chicken being fed every day by a farmer and expecting to continue being fed but then one day ends up being the farmer’s dinner.

>>

>> That is Bertrand Russell’s example. One Russell quote googling turns up is, The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken.

>

> I have read a little Bertrand Russell. I can’t remember whether I ran across the example there as well. David may have even mentioned where he got the story. It was 8 or more years ago. 😛

>

>>> While I agree that induction isn’t preferable to a good explanatorytheory, I still think it might frequently be useful.

>>>

>>> The way I’m thinking of it, though, might have a different term that

>>> applies and might not really be what people mean by induction…

>>>

>>> I think of it as a sort of pre-theory knowledge – noticing that there is a pattern.

>>

>> Noticing *which* pattern(s)?

>>

> It was meant generally. There are a lot of patterns – as I see you mention

below.

>

> The sun appears to rise and set daily. The moon appears to rise and set most evenings. The weather tends to get warmer overall and then cooler overall. I’ve heard in some places there are even distinct seasons – winter, spring, summer, fall. 😉 If plants don’t receive water, eventually, they dry up and stop growing. Animals (and people) tend to start off small and get bigger over time and then seem to level off. If one lets go of an object in midair, it usually tends to fall although there are a few unusual items that don’t fall or fall very very slowly (balloons, feathers). Water when exposed to certain cold enough temperatures seems to become a solid. I could go on indefinitely with examples.

Dan Frank

Interest: TCS

danjfrank@gmail.com

Dan Frank posted on the FI list from June 2013 to August 2014 and he was on the TCS list before that. A sample of his writing:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/1305

>On Mon, Sep 9, 2013 at 5:46 PM, Elliot Temple <curi@curi.us> wrote:

>>

>> On Sep 9, 2013, at 3:29 PM, Dan Frank <danjfrank@gmail.com> wrote:

>>

>>> On Mon, Sep 9, 2013 at 11:09 AM, Elliot Temple <curi@curi.us> wrote:

>>>>

>>>> On Sep 9, 2013, at 6:15 AM, Anontwo Too <anontwotoo@gmail.com> wrote:

>>>>

>>>>> On Mon, Sep 9, 2013 at 12:59 PM, Jordan Talcot <jordan.talcot@gmail.com> wrote:

>>>>>>

>>>>>> On Sep 9, 2013, at 1:33 AM, Anontwo Too <anontwotoo@gmail.com> wrote:

>>>>>>

>>>>>>> Won’t they [children] be interested at one point [in letters and reading], when they notice that they are

>>>>>>> useful to people?

>>>>>>

>>>>>> There are a lot of things that are useful to people. Why are you assuming that all children will notice that letters in particular are useful? Do you think that all children will notice ALL useful things? Do you think that there is something in the letters themselves that will make children notice them?

>>>>>

>>>>> Because letters are very fucking useful. A big deal. Not just a tiny bit useful.

>>>>>

>>>>>> It’s like a girl getting a period and not finding tampons or sanitary

>>>>> towels useful. “Oh, I’m not bothered, I’ll just bleed over the place.”

>>>>

>>>> Or like most people making philosophical mistakes that massively fuck up most of their lives, and not finding philosophy useful and interesting? “Oh, I’m not bothered, I’ll have a string of failures for my life.”

>>>>

>>>> Except, that happens… That is what most people actually do.

>>>>

>>>> Just because some knowledge would be extremely valuable to someone, and they have a pressing need of it, and it’s available, does NOT mean they will automatically find it, want it, value it, figure out how to learn it, learn it, etc, etc

>>>>

>>>

>>> They need to be persuaded by something that this knowledge is actually

>>> valuable to them.  In our culture, though, there is some knowledge

>>> that is much more “obviously” valuable than others.  e.g. here is a

>>> good way to deal with a cut on your hand: stop the bleeding with

>>> something like a paper towel or a napkin or gauze, and then maybe

>>> putting some antiseptic and/or bandage on it if it is a large enough

>>> cut.  Or doing the first part and then going to an emergency room to

>>> get it stitched if it’s even larger.  Is this controversial?

>>

>> It’s unclear to me that that that knowledge is particularly valuable to personally, individually have.

>>

>> First of all, how often do you cut your hand? Big enough to need to do anything?

>>

>> Second, if you are incompetent about hand cuts, so what? Someone will help you. Maybe you’re alone and can’t quickly get someone (but still who cares, no problem, google it or phone someone to tell you what to do). But most people most of the time have reasonably quick access to someone IRL who would help them with a cut.

>>

>> You can be like “omg i cut my hand. omg omg wtf do i do? it hurts it hurts!! please help me” and people will be like “don’t worry, just calm down, it’ll be fine. here let me wash that off for you and get you a bandaid” or whatever. people are nice and helpful about that kind of thing, so you don’t really have to know much yourself.

>>

>> I do think some knowledge about this is worth having yourself but I could easily see someone disagreeing, and I don’t think it makes that much difference either way.

>

> Interesting.  I think that our culture makes it pretty easy-to-get

info and the cost of learning it is very low, so it’s worth getting

(including getting knowledge like “I don’t know much about what is a

bad cut or not, but this is still bleeding a lot after five minutes so

I’m going to go to an expert just in case,” which is a form of

knowledge about cuts that I’m talking about).

>

> Even if you don’t cut your hand often.  Like the way that knowledge

about what to do if someone steals your car is really easy to get in

our culture (call 911.) even though you don’t often need it.

>

>>(And there’s also the issue of when to learn it. I think many people delay learning about how to handle cuts until after having more than one cut. Then they end up learning it cause they go through the process multiple times and remember some stuff or ask questions about what’s happening, not because they ever decided that today is the day to go spend 15 minutes studying how to deal with hand cuts)

>>

>

> Agreed.  People can begin learning to read the same way, can’t they?

Even in mainstream coercive culture isn’t this a common way people

learn some reading?  e.g. “read this book to me.” and then they

remember some stuff and on the fifteenth or hundredth time they ask

some questions about what’s happening and then they learn a bit.  They

don’t have to decide “today is the day” to go spend 15 minutes

learning to read.

>

> Not too long after this they are typically coerced to do just that, of

course.  But even in many conventional households the other way of

learning reading often happens first, for a year or two at least.

Maybe not in the really hardcore “you must read by age 4” households

or whatever? But it’s not uncommon.

Guilherme Neto

guuinetto@gmail.com

Guilherme Neto posted on the FI group from April 2018 to November 2018. A sample of his writing:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/28710

> http://curi.us/1443-children-dont-exist

>>When a child doesn’t like school, it certainly never occurs to parents that they are dealing with a person who has a preference and a life, and perhaps should have some control over his life.

>

> Parents do think that kids will eventually have preferences and a

life. Its like kids are only potential people.

>

>>Instead, all that exists to them is a ball of clay which has the potential to be an adult with the skill to run its own life, and will get there not by practicing doing that but by molding.

>

>Its interesting how parents expects their kids to be independent and

how the road to it is not only centered on dependency but averse to

independence.

>

> People are supposed to became capable of running their own life by

spending their first years ignoring their own judgment and following

the orders of authorities.

The Bitty Guy

letmedisposeofthis@gmail.com

The Bitty Guy posted on the FI group from April 2014 to January 2018. The following message by Elliot Temple shows some representative quotes from The Bitty Guy:

https://groups.yahoo.com/neo/groups/fallible-ideas/conversations/messages/3246

> On Apr 30, 2014, at 6:49 PM, The Bitty Guy <letmedisposeofthis@gmail.com> wrote:

>

>> On Wed, Apr 30, 2014 at 7:24 PM, Elliot Temple <curi@curi.us> wrote:

>>> On Apr 30, 2014, at 6:56 AM, The Bitty Guy <letmedisposeofthis@gmail.com> wrote:

>>>> Why is it at all necessary to call me, or my ideas, ‘left-wing’ or ‘antisemitic’?

>>

>>

>> If I showed you a factually accurate map of the Middle east, would you

>> call that antisemitic as well?

>

> No.

>

>>

>>>> Although the jury is out as to whether your posts qualify as ‘high quality’, they are still definitely above average; among the best I have yet to find, in the limited time I have to look.

>>>>

>>>> In all events, I do respect the time and discipline devoted to this listserve/blog. For now, at least (until I can find that elusive place in the bloggosphere that categorically discourages all personal attacks, ad hominem and otherwise), I value this one.

>>>>

>>>> Importantly, having my ‘talking points’ accepted by any of my readers, with or without debate, is not my primary purpose for being here. I am open to changing my opinions, and, with participation in the realm of ideas, hope to learn a thing or two that will improve my philosophy and knowledge of world events.

>>>>

>>>> Furthermore, despite your unfortunate attacks,

>>>

>>> i’m not sure what attacks you’re referring to.

>>

>>

>> Are you really, honestly not aware that inserting anti-semite into the

>> dialogue constitutes an attack? Maybe it would help to remind you if

>> you used the word “anti-jew bigot” instead.

>>

>>

>> It is the Anti-Semite Smear. Given the frequency with which you bring

>> it up, and in keeping with your convention of using acronym for

>> commonly used terms, how about if I simply call it the “A.S.S.

>> attack”? Do you honestly not know that it constitutes a personal

>> smear, that can have social, professional, and even legal consequences

>> for the victim? In many cases, possibly yours included, this venomous

>> bite is precisely why the term is used, and how it shapes and silences

>> debate.

>

> Do you believe that

>

> 1) some things are anti-semitic and calling them as such is reasonable

>

> or

>

> 2) nothing is anti-semitic, and the term is always and only a smear

>

> ?

>

>

> If (1), would you agree that therefore some kind of argument or explanation is necessary to differentiate between smears and non-smears, before you can assume it’s a smear? And you didn’t provide such an differentiating explanation.

>

>

>

> And do you believe that

>

> 1) using quotes improves discussion

>

> or

>

> 2) refusing to use exact quotes of things you are complaining about is a better approach

>

> ?

>

>

>

>>> if you thought “left-wing talking point” is a personal attack, i disagree. i consider it a factually accurate descriptive statement about the text/ideas you posted (not about you as a person).

>>

>>

>> Are you serious? The ‘left-wing talking point’ is irrelevant. Who

>> really cares about that? No one is fired, sued, or removed from office

>> over that (quite the opposite)…it’s the A.S.S. attack I was

>> referring to… Don’t A.S.S. me, bro!

>

> Yes I seriously, honestly find it hard to tell what you are referring to when you refuse to use quotations or carefully explain. We see the world differently. Your choices are to communicate or not be understood.

>

> Elliot Temple

> http://www.fallibleideas.com

> www.curi.us

Kristen Ely

Interest: TCS

kristeneely@yahoo.com

Kristen Ely posted from May 2013 to March 2017. A sample of her writing:

> On Nov 4, 2015, at 10:13 PM, Leonor Gomes lnrgms@gmail.com [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

>

>> 2015-11-05 2:43 GMT+00:00 Rami Rustom rombomb@gmail.com

>> [fallible-ideas] <fallible-ideas@yahoogroups.com>:

>>> On Wed, Nov 4, 2015 at 4:39 AM, Leonor Gomes lnrgms@gmail.com

>>> [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

>>>

>>>> 2015-11-04 10:23 GMT+00:00 Rami Rustom rombomb@gmail.com

>>>> [fallible-ideas] <fallible-ideas@yahoogroups.com>:

>>>>

>>>>> On Tue, Nov 3, 2015 at 9:42 AM, Leonor Gomes lnrgms@gmail.com

>>>>> [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

>>>>>

>>>>>> but at least tries to have some

>>>>>> original ideas of his own.

>>>>>

>>>>> this isn’t a goal. at least it’s not a GOOD goal.

>>>>>

>>>>> one should go after the best ideas, whether he originated them or not.

>>>>

>>>> go after? or learn them? think about them?

>>>

>>> yes.

>>>

>>>>> caring that one’s ideas are original is a status-seeking mistake.

>>>>

>>>> why is it a status seeking mistake?

>>>

>>> i thought you meant something to the effect of: ignoring tradition and

>>> doing your own thing for the sake of being ORIGINAL.

>>

>>

>> Howard Roark did that.

>

> I don’t think he did.

>

> He wasn’t trying to be original for the sake of being original. He wanted to build in ways that make sense, that fit the purpose of the building, using his own standards, his own judgment, to the best of his ability.

>

> He did say:

>

>> I inherit nothing. I stand at the end of no tradition.

>

> And that may be a mistaken way of thinking of tradition. But I don’t think he actually did *ignore* tradition. He looked at traditions in architectural design and criticized them. He didn’t want to do things just because they had always been done that way.

>

> And he didn’t ignore good traditions of structural engineering.

Joao Duarte

joao.monteiro.duarte@gmail.com

Duarte posted on the FI list from February to May 2019. A sample of his writing:

>On Fri, May 17, 2019 at 6:57 PM anonymous FI

><anonymousfallibleideas@gmail.com> wrote: 

>>

>>

>> On May 17, 2019, at 7:50 AM, João Duarte

>> <joao.monteiro.duarte@gmail.com> wrote: 

>>

>>> On Thu, May 16, 2019 at 11:44 PM Elliot Temple <curi@curi.us> wrote:

>>>>

>>>> https://blog.rongarret.info/2009/04/on-shadow-photons-and-real-unicorns.html?showComment=1557966795815#c1990820171770996365 

>>

>>

>>

>>>> I understand not knowing this stuff. But something is going really

>>>> wrong when people’s attempt at reading involves making up nonsense

>>>> that just isn’t in the paper. He can’t tell what it says, but

>>>> instead of realizing he doesn’t understand he just makes wild

>>>> guesses. The stuff RG has come up with goes beyond misreadings of the

>>>> paper to making stuff up that has nothing to do with the paper. I

>>>> think people learn this method in school, where it’s common.

>>>>

>>>> RG wants to hear none of this, which is part of how he stays so wrong

>>>> and confused (and is why I’m posting here instead of another reply

>>>> on his blog). When I told him a subset of the problems, he said, “I’m

>>>> sorry, Elliot, but I just can’t deal with your level of nit-pickery.

>>>> Good bye.”

>>>>

>>>> I tried to be helpful to him initially by writing a long, serious,

>>>> edited explanation of some CR material. His response was to delete

>>>> all of it and never engage with any of how CR works, and instead to

>>>> alternate between claiming to agree with me and claiming i’m wrong

>>>> (while also misquoting and making other errors).

>>>>

>>>> If anyone has any ideas about how to help such a person, or how to

>>>> find better people, please share!

>>>

>>> I think you could give some caveats when you criticize things that can

>>> be seen as trivial. Or you could say beforehand that you, sometimes,

>>> can be misjudged as someone who is acting in bad-faith. You can be

>>> clear about your intentions before you start a discussion (this could

>>> have helped when he thought you were trying to be intellectually

>>> superior to him). Because you are uncommonly critical, people can have

>>> a bad perception. I had that “feeling” before and now I think you are

>>> really trying to help and learn.

>>

>> You didn’t say what caveats to say or what to say about intentions. You

>> didn’t give any sentences that you think would help, which could be

>> tried or criticized.

>

> This advice can be tried without giving examples. Just be honest about  the intentions you have when having a discussion for the first time if  they are often misinterpreted. The caveat is to say why the thing ET  is criticizing although may seem irrelevant it isn’t. It’s difficult  sometimes to know when the other person will think that. But it’s  better to be safe.

Balázs Fehér

feher.balazs.feher@gmail.com

Balázs Fehér posted on the FI list from June 2013 to April 2014. A sample of his writing:

>2014-04-29 10:25 GMT+02:00 Alan Forrester <alanmichaelforrester@googlemail.com>:

>> On 28 April 2014 22:48, Balázs Fehér <feher.balazs.feher@gmail.com> wrote:

>>> 2014-04-28 19:28 GMT+02:00 Alan Forrester <alanmichaelforrester@googlemail.com>:

>>>> On 28 April 2014 16:12, Balázs Fehér <feher.balazs.feher@gmail.com> wrote:

>>>>> From BOI chapter one terminology:

>>>>>

>>>>> “Principle of induction: the idea that ‘the future will resemble the

>>>>> past’, combined with the misconception that this asserts anything

>>>>> about the future.”

>>>>>

>>>>> I would have a question. I understand that the idea that the future

>>>>> will resemble the past is contradicted, for example, each time a new

>>>>> design of microchip is created. However i don’t understand the second

>>>>> part. Why ‘the future will resemble the past’ does not assert anything

>>>>> about the future? Theories about the future are the same as theories

>>>>> about the past (in case of universal theories, which are time

>>>>> invariant).

>>>>

>>>> A universal theory makes predictions about the past and the future and

>>>> so says that the past and the future are alike in the sense that they

>>>> both follow the theory.

>>>

>>> Yes.

>>>

>>>> This is totally irrelevant to the controversy

>>>> over induction because induction is not supposed to be about what you

>>>> can know when you have a theory, it is supposed to be about how

>>>> theories are created and confirmed.

>>>

>>> So the second part of DDs sentence is irrelevant? Or it refers to

>>> something else than what I implied?

>>

>> Inductivism is a variety of justificationism so it is about how to

>> justify stuff. Justifying anything is impossible but the point of the

>> terminology snippet is to explain how the inductivists think about

> what they’re doing. The inductivists think that idea that the future

>> resembles the past would not be enough on its own to show that you can

>> justify universal theories using information about the past. To

>> justify induction they think you would have to add that information

>> about the past somehow justifies the future implications of universal

>> theories we have come up with today, hence the second part of the

>> sentence.

>

> I see, thanks!

>

>>

>> But as DD points out in part of that chapter, it doesn’t really matter

>> for inductivism that the future resembles the past: such a principle

>> will not save inductivism.

>>

>> It’s a better idea to actually quote and discuss arguments rather than

>> the terminology part of the book which serves at best as a quick

>> reminder of some argument you already understand. If you don’t

>> understand an argument the terminology section will not help you and

>> there is no point in discussing it.

>

> Yeah, I guessed the argument was in the chapter, I just did not remember so asking was faster 🙂

Dennis Hackethal

Interest: critical rationalism

dennis.hackethal@googlemail.com

Dennis came to FI with an interest in Popperian epistemology and artificial intelligence. He currently has a podcast about AI:

https://soundcloud.com/dchacke

He posted between December 2018 and April 2019. A sample of his writing:

https://groups.google.com/d/msg/fallible-ideas/wjMd33c5Jnw/eRJrzf5kBgAJ

> I was struggling the other day to explain to someone why the growth of knowledge is inherently unpredictable. I *think* I can explain it in terms of “it’s a generic algorithm, and a genetic algorithm has unpredictable output”, but unless the other party is already familiar with the concept of knowledge being the result of a genetic algorithm, that doesn’t go very far. It also made me think that a genetic algorithm is unpredictable *to a degree*. If someone runs a genetic algorithm for eg the traveling salesman problem, they know it’s going to return a solution to the problem in terms of distances etc, and not something completely unexpected. So there’s at least some way to constrain the space of possible answers. I don’t think it’s possible to constrain human answers in this way, but I don’t think I understand why. I also don’t know if probabilistic = unpredictable (my guess is “no”).
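The "unpredictable *to a degree*" point in the quote can be illustrated with a toy genetic algorithm for the traveling salesman problem. This sketch is my own illustration, not code from the list: the function names (`ga_tsp`, `tour_length`, `mutate`) and all parameters are made up for the example. The output is always constrained to be a valid tour, but *which* tour you get depends on the random seed.

```python
import random

def tour_length(tour, dist):
    # Total length of the closed tour under the distance matrix `dist`.
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def mutate(tour, rng):
    # Swap two cities to produce a slightly different candidate tour.
    a, b = rng.sample(range(len(tour)), 2)
    t = tour[:]
    t[a], t[b] = t[b], t[a]
    return t

def ga_tsp(dist, generations=200, pop_size=30, seed=0):
    # Minimal evolutionary loop: keep the shorter half of the population
    # each generation and refill it with mutated copies of survivors.
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        survivors = pop[: pop_size // 2]
        pop = survivors + [
            mutate(rng.choice(survivors), rng)
            for _ in range(pop_size - len(survivors))
        ]
    return min(pop, key=lambda t: tour_length(t, dist))
```

Running `ga_tsp` with different seeds can return different tours, yet every result is guaranteed to be a permutation of the cities: the answer space is constrained even though the specific answer isn't predictable. That's the sense in which the quote says a genetic algorithm is unpredictable only to a degree.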

 
