Find it Fast: Extracting Expert Information From Social Networks, Big Data, Tweets and more (Signed by Author)
My latest (6th) edition on how to find top-notch sources and do excellent research in the age of social media, real-time information, big data and more (CyberAge, 2015).
Here is an excerpt from the Preface:
Find it Fast, first published in 1987, was most recently revised for its 5th edition in the year 2000. That’s eons ago in the information world, and an awful lot has happened in that world since we last left off.
For example: Web 2.0, Wikipedia, Twitter, citizen reporting, sentiment detection, real-time search, social media, Google+, Pinterest, Instagram, Tumblr, YouTube, Vine, tagging, cloud collaboration, the wisdom of crowds, collaborative filtering, LinkedIn, Klout, Siri, Snapchat, social media monitoring, Facebook and Foursquare and Flickr—and more!
The fact is, back in the year 2000 those concepts and sites were not around, or barely emerging.
That’s an enormous amount of change in the information world.
But a lot has not changed too—specifically, the elements of doing good research.
For example: understanding how sources differ, knowing which resource to use, finding and evaluating experts, verifying information, understanding surveys, controlling one's biases, asking the right questions, assessing source credibility, disregarding the noise, understanding what's truly significant, and knowing when to conclude a project.
The More Things Change….?
So can we say, then, that doing good research has not fundamentally changed—that only the sources, tools, and strategies have?
Well, not quite. While yes, the sources, tools, and strategies have clearly changed, they have, in fact, changed so dramatically that they have produced a revolutionary rethinking of many aspects of what constitutes good research.
Why are these changes truly revolutionary? We have to start by looking back to the mid-1990s and the arrival of the Internet, which demolished the high barriers to entry for becoming a publisher. The last time we had a change that revolutionary for creating and distributing information was over 550 years ago, when a German by the name of Johannes Gutenberg came up with the concept of movable type.
By the early 2000s, we saw the emergence of the social Web, or Web 2.0 as it was then known. Initially, this two-way interactive Web mainly took the form of blogs, which were quickly followed by an explosion of other two-way information-sharing platforms: Flickr, YouTube, Tumblr, MySpace, Facebook, and many others. The social Web permitted anybody to share his or her opinions, knowledge, and content, and to become part of a larger global conversation. As a result, we have all had to rethink some fundamental assumptions about what constitutes a credible source and, therefore, what it means to do “good” research.
Who’s an Expert?
For example, who counts as a trusted source today? Before the social Web, one established one’s authority on a subject through credentials (e.g., a PhD in history or an MA in design), published works, professional presentations, or other forms of recognition by institutional gatekeepers such as universities, publishers, and editors.
So if a person could manage to make it through those hurdles—write the dissertation, get a presentation approved, have an article published, be quoted in a news story—then he or she was given the mantle of a legitimate authority. The rest of us could then feel assured that he or she was in fact a credible source of expertise.
The Internet and, more recently, social media have forced us to reconsider, or at least expand, whom we consider an authority, and whose views and knowledge are worth taking seriously.
Today we live in an information environment where the views of countless people without any formal—or even informal—credentials, who have never been peer reviewed, presented at a conference, been quoted in a newspaper, or even created a piece of work, are given a great deal of credence. Consider, for example, the popularity and sometimes even prominence of certain bloggers, “citizen journalists,” anonymous YouTube producers, prolific Wikipedia entry writers, or just someone with a high “Klout” score. Some of these people are trusted as credible sources. Should they be given that recognition just because they are popular? Should popularity really confer authority and expertise?
That critical question will be discussed later in this book.
And what about the whole “wisdom of crowds” meme: the idea that, in certain circumstances, a large group of ordinary people is smarter, and more likely to come to an accurate decision, than any individual—even if that individual is an expert?
These are the kinds of questions that are, in fact, creating—yes, revolutionary—changes in how we all create, find, and assess information sources, and they will be treated here in detail.
Big Data and Big Challenges
This question of who today deserves to be considered a legitimate authority and credible source is not the only dramatic change facing information searchers. Of course, there is little difficulty anymore in finding a simple fact or answer, or even in turning up a good description of a place, person, or thing. Going to a search engine and keying (or speaking) a few words into Google to get matching results is as easy as it gets.
But significant challenges still remain for those with more involved research tasks. One is simply how to sort through and make sense of the enormous streams of incoming real-time data continually washing over us. The fact that we are all publishers and broadcasters now just means the generation of more and more data—from tweets to video clips—making it that much harder to get clarity on what truly deserves our precious and limited attention.
One increasingly popular approach for making sense of the growing ocean of information is to rely on “big data,” perhaps the hottest buzzword in the business and technology world. The term describes the use of extremely powerful computers to collect, crunch, and analyze the enormous and ever-increasing amounts of information being produced these days. Sources generating data today range from people’s social media profiles to sensors on household objects, supermarket scanners, weather station reports, smartphone geolocation data, and countless others.
The promise of big data is its ability to deeply analyze vast amounts of disparate data, find correlations, and surface previously hidden patterns to provide new insights and drive more informed decisions. The big data phenomenon also presents challenges for the researcher, which are described later in this book.
The Value of Find it Fast in the Digital Age
You may be wondering about the value of some of the items included in this book, or even the concept of the book itself.
Why, for instance, have I bothered to include certain seemingly old-fashioned print directories and indexes in the library chapter? While there are a few reasons why print is still valuable, as discussed in Chapter 1, one subtle but quite important reason is that using these sources serves as a check on your Internet searching, which generally favors giving the highest rankings to the most popular sites. This is fine as far as it goes (much more on that process in Chapters 3 and 5), but it means that while you are likely to get a lot of results published by the best-known and most-used news and information sources like, say, the Huffington Post, The New York Times, CNN, the Guardian, and TMZ, you are much less likely to turn up articles published in very small, niche, and lesser-known journals and news sites. And sometimes these turn out to be your best sources.
By adding an old-fashioned print index or directory to your set of research tools, you can complement the search engine’s ranking method with a source that has a different mechanism for discovery, and thereby override the engine’s built-in bias. You then have a more rounded set of results than you would have received from a Google search alone—and that’s a good thing.
Given the speed of change in this arena, you may also wonder how long a book like this can remain relevant and useful. It’s a good question.
While the Web sites and search tools referenced in this book are bound to change, I’ve addressed this issue in two ways: first, by including only sources that have been around a long time and are most likely to remain so for the foreseeable future. But even more importantly, the heart of this book is not about sites or the technologies of the moment. Instead, it is about cultivating a deeper understanding of the critical principles of what it means to be a good researcher in the digital age.
Finally, I’ll conclude with a few more words on the journey this book has taken since its first edition in 1987.
The Internet provides explanations for problems or concerns that may have stumped us for years…The sum total of all of our knowledge and thoughts, in fact, is increasingly being made public, linked and accessible. All sorts of artifacts of knowledge are being digitized, made searchable and available…
Of course, the Internet will never answer the deepest human questions…or will it? And finding answers to one set of questions allows for, and even compels, the emergence of a new set...and so our questioning will surely continue.
And so it continues today…
Here’s to getting your questions answered—and then generating new and even better ones!