Philosophers have long held that we need to believe that what we do is some sort of investment in the future, a sentiment summed up in a quote attributed to Martin Luther (1483-1546):
“Even if I knew that tomorrow the world would go to pieces, I would still plant my apple tree.”
Existentialists, for example, suppose that life has a sort of inertia that makes us carry on, even if the whole thing is pointless. Now we have some evidence that this isn't quite the whole story:
In this work, we use player behavior during the closed beta test (CBT) of the MMORPG ArcheAge as a proxy for an extreme situation: at the end of the closed beta test, all user data is deleted, and thus, the outcome (or penalty) of players' in-game behaviors in the last few days loses its meaning.
We analyzed 270 million records of player behavior in the 4th closed beta test of ArcheAge. Our findings show that there are no apparent pandemic behavior changes, but some outliers were more likely to exhibit anti-social behavior (e.g., player killing). We also found that contrary to the reassuring adage that "Even if I knew the world would go to pieces tomorrow, I would still plant my apple tree," players abandoned character progression, showing a drastic decrease in quest completion, leveling, and ability changes at the end of the beta test.
Of course, this isn't quite the same situation: the player acts as a kind of superbeing who can still remember what happened after the user data is deleted, so there is still some investment in the future outcome.
At a more practical level, we have:
Our study brings practical and theoretical implications to game industry and research communities. Practically, our findings on irregular behavior of individual churners could be an alarm, or early-warning, of their leaving. As addressing churners remains a consistent goal of game developers, our work can help inform the development of retainment strategies, such as offering incentives or new interactions to help them become attached to the virtual world. Also, what actions players increasingly or decreasingly perform when the end of the CBT comes provides guidance on how to run the CBT; some features should be tested earlier because players abandon them when the end of the CBT comes.
The blind wisdom of the crowd is a popular idea: it is the combining of many unbiased estimates, the thinking goes, that makes the collective estimate better. The problem is that in many cases humans are not unbiased. Can letting the crowd deliberate on the problem help?
The aggregation of many independent estimates can outperform the most accurate individual judgment. This centenarian finding, popularly known as the 'wisdom of crowds', has recently been applied to problems ranging from the diagnosis of cancer to financial forecasting.
It is widely believed that the key to collective accuracy is to preserve the independence of individuals in a crowd. Contrary to this prevailing view, we show that deliberation and discussion improves collective wisdom.
We asked a live crowd (N=5180) to respond to general knowledge questions (e.g. the height of the Eiffel Tower). Participants first answered individually, then deliberated and made consensus decisions in groups of five, and finally provided revised individual estimates.
We found that consensus and revised estimates were less biased and more diverse than what a uniform aggregation of independent opinions could achieve. Consequently, the average of different consensus decisions was substantially more accurate than aggregating the independent opinions.
Even combining as few as four consensus choices outperformed the wisdom of thousands of individuals. Our results indicate that averaging information from independent debates is a highly effective strategy for harnessing our collective knowledge.
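The arithmetic behind that claim is easy to illustrate with a toy simulation. This sketch is not the study's data: the shared bias, noise level, and the assumption that deliberation partially cancels the shared bias are all modelling choices of mine, made only to show why averaging a few debiased consensus decisions can beat averaging thousands of independent but biased guesses.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 300.0  # e.g. the height of the Eiffel Tower in metres

def individual_estimate():
    # Simulated guess: a shared (systematic) bias plus individual noise.
    # Averaging removes the noise, but not the shared bias.
    shared_bias = 40.0
    return TRUE_VALUE + shared_bias + random.gauss(0, 50)

def consensus_estimate(group):
    # Crude model of deliberation: discussion cancels most of the shared
    # bias (an assumed effect size) before the group averages its views.
    debiased = [e - 35.0 for e in group]
    return statistics.mean(debiased)

# Wisdom of the independent crowd: average of 5000 individual guesses.
individuals = [individual_estimate() for _ in range(5000)]
crowd_avg = statistics.mean(individuals)

# Average of just 4 consensus decisions from groups of five.
groups = [[individual_estimate() for _ in range(5)] for _ in range(4)]
consensus_avg = statistics.mean(consensus_estimate(g) for g in groups)

print("independent crowd error:", abs(crowd_avg - TRUE_VALUE))
print("four-consensus error:   ", abs(consensus_avg - TRUE_VALUE))
```

Under these assumptions the independent crowd converges precisely on the wrong answer (true value plus the shared bias), while four small deliberating groups land much closer to the truth.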
Fake news is big news at the moment. So what role did Twitter play in the 2016 election?
The 2016 U.S. presidential election has witnessed the major role of Twitter in the year's most important political event. Candidates used this social media platform extensively for online campaigns. Millions of voters expressed their views and voting preferences through following and tweeting.
Meanwhile, social media has been filled with fake news and rumors, which could have had huge impacts on voters' decisions. In this paper, we present a thorough analysis of rumor tweets from the followers of two presidential candidates: Hillary Clinton and Donald Trump.
To overcome the difficulty of labeling a large amount of tweets as training data, we first detect rumor tweets by matching them with verified rumor articles. To ensure a high accuracy, we conduct a comparative study of five rumor detection methods. Based on the most effective method which has a rumor detection precision of 94.7%, we analyze over 8 million tweets collected from the followers of the two candidates.
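The core trick of matching tweets against verified rumor articles can be sketched as a similarity threshold. Everything below is a placeholder assumption of mine, not the authors' method (they compare five detectors and report 94.7% precision with the best one): the sample articles, the tokenizer, the cosine measure, and the threshold are all illustrative.

```python
import re
from collections import Counter
from math import sqrt

# Hypothetical examples: real verified rumor articles would come from
# fact-checking sites, and tweets from the Twitter API.
RUMOR_ARTICLES = [
    "candidate secretly sold weapons to a foreign government",
    "candidate health records reveal a hidden serious illness",
]

def tokens(text):
    # Bag-of-words representation of a short text.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two token-count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def looks_like_rumor(tweet, threshold=0.5):
    # Flag a tweet if it is lexically close to any verified rumor article.
    t = tokens(tweet)
    return any(cosine(t, tokens(article)) >= threshold for article in RUMOR_ARTICLES)

print(looks_like_rumor("so the candidate sold weapons to a foreign government?!"))  # True
print(looks_like_rumor("great rally today, beautiful weather"))  # False
```

The appeal of this approach is that it needs no hand-labeled training tweets: the fact-checked articles themselves act as the labels.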
Our results provide answers to several primary concerns about rumors in this election, including: which side of the followers posted the most rumors, who posted these rumors, what rumors they posted, and when they posted these rumors. The insights of this paper can help us understand the online rumor behaviors in American politics.
The conclusions are also interesting:
Many interesting rumor tweeting patterns are discovered, including:
1) more rumors are posted at election time than on average;
2) rumor tweeting is dominated by a small group of users;
3) users post rumor related tweets to debunk rumors about their candidate or slander the opponent;
4) rumor tweeting erupts mainly in three types of occasions (key points in the presidential campaign, upon controversial emergency events, upon unique events), etc.
but are more or less what you might already believe.
An announcement that appeared yesterday on the Intel Developer Forum website signals that the event scheduled to take place in San Francisco in August will not take place, nor will there be any future [ ... ]