Entries tagged as linkroll
Sunday, August 5. 2012
-
Interesting comparison of Hadoop today with the Linux story from the past. If the analogy holds, Hadoop/MapReduce could be state of the art around 2020.
tags: bigdata cloud technology linux
-
While Hadoop is all the rage in the technology media today, it has barely scratched the surface of enterprise adoption
-
Hadoop seems set to win despite its many shortcomings
-
still in the transition from zero per cent adoption to one per cent adoption
-
IBM points to a few specific deficiencies
-
lack of performance and scalability, inflexible resource management, and a limitation to a single distributed file system
-
IBM, of course, promises to resolve these issues with its proprietary complements to Hadoop
-
Hadoop is batch oriented in a world increasingly run in real-time
-
customers are buying big into Hadoop
-
it’s still possible that other alternatives, like Percolator, will claim the Hadoop crown
-
Back in 2000 IBM announced that it was going to invest $1bn in advancing the Linux operating system. This was big news
-
it came roughly 10 years after Linus Torvalds released the first Linux source code, and it took another 10 years before Linux really came to dominate the industry
-
The same seems true of Hadoop today
-
we’re just starting the marathon
-
Three data-mangling job sites, all US-only
tags: bigdata career
-
Bright, one of several new companies
-
another new job site, Path.to
-
Gild, a third major player
-
tags: bigdata datascience technology opinion
-
Thomas H. Davenport, Paul Barth and Randy Bean
-
how do the potential insights from big data differ from what managers generate from traditional analytics?
-
1. Paying attention to flows as opposed to stocks
-
the data is not the “stock” in a data warehouse but a continuous flow
-
organizations will need to develop continuous processes
-
data extraction, preparation and analysis took weeks to prepare — and weeks more to execute
-
conventional, high-certitude approaches to decision-making are often not appropriate
-
new data is often available that renders the decision obsolete
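(To make the stock/flow idea concrete, here is a minimal sketch of my own, not from the article: a statistic that is updated in O(1) as each record arrives, so there is never a batch to wait for.)

```python
# A "flow" statistic: updated per record in O(1), no batch recomputation,
# no history kept. (Illustrative sketch; not from the article.)
class RunningMean:
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        # Incremental (Welford-style) mean update.
        self.n += 1
        self.mean += (x - self.mean) / self.n

stats = RunningMean()
for value in [12.0, 15.5, 9.0, 11.2]:  # stand-in for an endless event stream
    stats.update(value)
    print(stats.n, round(stats.mean, 3))
```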
-
2. Relying on data scientists and product and process developers as opposed to data analysts
-
the people who work with big data need substantial and creative IT skills
-
programming, mathematical and statistical skills, as well as business acumen and the ability to communicate effectively
-
started an educational offering for data scientists
-
3. Moving analytics from IT into core business and operational functions
-
new products designed to deal with big data
-
Relational databases have also been transformed
-
Statistical analysis packages
-
“virtual data marts” allow data scientists to share existing data without replicating it
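(Mechanically, a "virtual data mart" is little more than a database view: a stored query over existing tables, with nothing copied. A minimal sketch with sqlite3; the table and column names are invented for illustration.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("EU", "widget", 120.0), ("US", "widget", 80.0),
                 ("EU", "gadget", 40.0)])

# The "virtual data mart": a view over existing data; nothing is replicated.
con.execute("""CREATE VIEW eu_sales AS
               SELECT product, SUM(amount) AS total
               FROM sales WHERE region = 'EU' GROUP BY product""")

print(con.execute("SELECT * FROM eu_sales ORDER BY product").fetchall())
# -> [('gadget', 40.0), ('widget', 120.0)]
```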
-
traditional role of IT (automating business processes) imposes precise requirements
-
Analytics has been more of an afterthought for monitoring processes
-
where business and IT capabilities used to be based on stability and scale, the new advantages are based on discovery and agility
-
discovery and analysis as the first order of business
-
IT processes and systems need to be designed for insight, not just automation
-
The title is misleading: it's not about what data science is, but rather a vision of the ideal solution.
tags: datascience technology opinion
-
the old state and the ideal future state, which he calls "Analyst 1.0" and "Analyst 2.0"
-
Analyst 1.0 as the state of maturity achieved by using the last generation of business intelligence tools
-
Analyst 1.0 has some coding skills, and perhaps writes an SQL query here and there
-
inflexibility of data warehouses and relational databases
-
Our current state of affairs, which we’ll call Analyst 1.5, finds us in limbo
-
two primary limitations: the immense size and variety of the data, and the complexity of the tools needed
-
to get value from big data, business analysts cannot simply be presented with a programming language
-
Analyst 1.5 is characterized by a disconnect between data scientists and the tools and systems in the more complex camp of programmers and computer scientists
-
caused data to be totally fragmented
-
Analyst 2.0 will have arrived when vendors and IT make analysis easy enough that a typical business user can conduct analysis entirely by themselves
-
Tools such as self-learning recommendations engines
-
demands new skills, such as a more precise focus on aberrant or statistically significant data in a stream, as well as better tools
-
somehow at some point you have to get your analytical inspection down to the equivalent of code level
-
what we’re trying to model is every person’s brain, at least the part of the brain that decides how to shop, when to shop, and what you want
-
we need to continue to mine for behavioral data, such as what people looked at before and after they made transactions
-
among the top pitfalls is the tendency to focus on a very small piece of data without occasionally stepping back
-
tendency to over-focus on technology
-
organizations are tempted to put the most technology-savvy person on the job, rather than the most business-savvy
-
computer scientists are not trained to ask the right business questions
Continue reading "Link roundup, week 31/2012"
Sunday, July 29. 2012
-
tags: machinelearning books
-
Information Theory, Inference, and Learning Algorithms
-
640 pages, published September 2003
-
PDF (A4), 9 MB (fourth printing, March 2005)
-
HTML
tags: r statistics programming
-
formerly named R Cookbook
-
It is not related to Paul Teetor’s excellent R Cookbook
-
When are we done defining big data? (part 1)
tags: datascience bigdata opinion
-
settled on 3 Vs: volume, variety, and velocity
-
if big data is understood solely on the basis of these trends, it isn’t clear that it’s at all hype-worthy
-
if “big data” simply describes the volume, variety, and velocity of the information that constitutes it, our existing data management practices are still arguably up to the task
-
big data is hyped on the basis of its real or imagined outputs
-
a lot more interesting when you bring in ‘V’ for value
-
When are we done defining data science?
tags: datascience statistics opinion
-
the skills of a “data scientist” are those of a modern statistician
-
know how to move data around and manipulate data with some programming language
-
know how to draw informative pictures of data
-
Knowledge of stats, error bars, confidence intervals
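(For concreteness, a quick sketch of the error-bar arithmetic: a 95% confidence interval for a sample mean via the normal approximation.)

```python
import math

# 95% confidence interval for a sample mean (normal approximation; a t
# quantile, here about 2.45 for n=7, would be more accurate for small n).
sample = [4.1, 3.8, 5.0, 4.6, 4.2, 3.9, 4.4]
n = len(sample)
mean = sum(sample) / n
var = sum((x - mean) ** 2 for x in sample) / (n - 1)  # unbiased sample variance
se = math.sqrt(var / n)                               # standard error of the mean
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```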
-
try to get people from different backgrounds
-
Great communication skills
-
a lot of what we teach The Kids now looks a lot more like machine learning than statistics as it was taught circa 1970, or even circa 1980
-
Everything I know about statistics I’ve learned without formal instruction
-
is not, in my experience, intrinsically hard for anyone who already has a decent grounding in some other mathematical science
-
mastering them really does mean trying to do things and failing
-
potentially hazardous. This is the idea that all that really matters is being “smart”
-
counter-productive for students to attribute their success or failure in learning about something to an innate talent
-
Bill Franks is Chief Analytics Officer at Teradata
tags: datascience bigdata books
-
tags: datascience opinion bigdata
-
Some folks like to confuse Hadoop with big data
-
Focus On the Questions To Ask, Not The Answers
-
The failure of data warehouses to provide real-time data led to the creation of data marts
-
Data marts failed to provide complete, updated, and comprehensive views
-
existing solutions still don’t solve the problem. Why? The market and business environment have changed
-
Data moves from structured to unstructured. Sources exponentially proliferate. Data quality is paramount.
-
Real-time is irrelevant because speed does not trump fidelity. Quantity does not trump quality
-
Business questions remained unanswered despite the massive number of reports and views and charts
-
The big shift is about moving from data to decisions
-
tags: machinelearning video lectures
-
Draft videos (editing incomplete)
-
Entropy and Data Compression
-
Shannon’s Source Coding Theorem
-
Inference and Information Measures for Noisy Channels
-
Introduction to Bayesian Inference
-
Approximating Probability Distributions
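(The central quantity of the first lectures fits in a few lines: Shannon entropy in bits per symbol, which by the source coding theorem lower-bounds lossless compression.)

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """H(X) = -sum_x p(x) * log2 p(x), in bits per symbol. By the source
    coding theorem, no lossless code can beat this on average."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(shannon_entropy("abracadabra"))  # ~2.04 bits per symbol
```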
Posted from Diigo. The rest of my favorite links are here.
Friday, July 20. 2012
It took me quite a long time to discover that my favorite knowledge management tool, Diigo, provides a feature to post one's bookmarks to a blog. Since I often feel the urge to repost links I stumble upon, I will do that occasionally from now on, mainly covering the topic pool of data mining (and related buzzwords), with flavors ranging from theory to applications and from technology to business. (I can't really do that on social media sites, as it's almost impossible to consume posts there topic-wise. So blogs aren't obsolete just yet.)
By the way, Diigo is really awesome: you can highlight text on webpages and add annotations, which helps you understand an article and creates a summary on the fly while you read. In this sense: if you want to be briefed, read at least this. (And don't worry, the next episodes will contain less content; this one reaches back a few weeks.)
-
tags: computerscience datascience technology software
-
GraphChi exploits the capacious hard drives
-
a Mac Mini running GraphChi can analyze Twitter’s social graph from 2010—which contains 40 million users and 1.2 billion connections—in 59 minutes
-
The previous published result on this problem took 400 minutes using a cluster of about 1,000 computers
-
graph computation is becoming more and more relevant
-
GraphChi is capable of effectively handling many large-scale graph-computing problems without resorting to cloud-based solutions or supercomputers
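(The core trick, reduced to a toy of my own: stream the edge list from disk and keep only small per-vertex state in RAM. GraphChi's actual "parallel sliding windows" scheme over sorted shards is far more sophisticated; this only illustrates the principle.)

```python
from collections import defaultdict

def degree_counts(edge_file):
    """One out-of-core pass over a graph stored as 'src dst' lines on disk.
    Only per-vertex state (here: degrees) lives in RAM; the edges never do."""
    degree = defaultdict(int)
    with open(edge_file) as f:        # edges are streamed, not loaded
        for line in f:
            src, dst = line.split()
            degree[src] += 1
            degree[dst] += 1
    return degree
```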
-
Google’s Percolator paper
tags: google datascience bigdata technology
-
MapReduce and other batch-processing systems cannot process small updates individually
-
Percolator, a system for incrementally processing updates to a large data set
-
tags: bigdata datascience technology opinion
-
the topic of big data is still at an early stage
-
still in the analysis and planning phase
-
availability of new analytics and database technologies
-
dynamic growth of internal company data traffic
-
big data often enters companies "through the back door"
-
data growth of 42 percent by the end of 2014
-
a lot of work ahead on the storage infrastructure side
-
midsize companies (500-999 employees) and large enterprises (1,000+ employees)
-
More than a third expect cost savings. Almost half anticipate better insights into customers' information and consumption behavior
-
high expectations placed on service providers and solution vendors
-
tags: datascience bigdata cloud technology opinion
-
it has become synonymous with big data
-
Is the enterprise buying into a technology whose best day has already passed?
-
Hadoop’s inspiration – Google’s MapReduce
-
make big data processing approachable to Google’s typical user/developer
-
Hadoop (the Hadoop Distributed File System and Hadoop MapReduce) was born in the image of GFS and GMR
-
Your code is turned into map and reduce jobs, and Hadoop runs those jobs for you
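(For readers who have never touched it: you supply a map function and a reduce function, and the framework takes care of partitioning, shuffling, and re-execution. Below is a minimal local simulation of the model, with word count as the canonical example; real jobs are written against the Java API or via Hadoop Streaming.)

```python
from collections import defaultdict

def map_fn(doc):
    for word in doc.split():
        yield word, 1                 # emit (key, value) pairs

def reduce_fn(word, counts):
    yield word, sum(counts)           # one call per key, with all its values

def run_job(docs):
    groups = defaultdict(list)        # the "shuffle": group values by key
    for doc in docs:
        for key, value in map_fn(doc):
            groups[key].append(value)
    result = {}
    for key, values in groups.items():
        for k, v in reduce_fn(key, values):
            result[k] = v
    return result

print(run_job(["big data is big", "hadoop runs map and reduce"]))
# -> {'big': 2, 'data': 1, 'is': 1, ...}
```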
-
Google evolved. Can Hadoop catch up?
-
GMR no longer holds such prominence in the Google stack
-
Here are technologies that I hope will ultimately seed the post-Hadoop era
-
it will require new, non-MapReduce-based architectures that leverage the Hadoop core (HDFS and Zookeeper) to truly compete with Google
-
Percolator for incremental indexing and analysis of frequently changing datasets
-
each time you want to analyze the data (say after adding, modifying or deleting data) you have to stream over the entire dataset
-
displacing GMR in favor of an incremental processing engine called Percolator
-
dealing only with new, modified, or deleted documents
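(The contrast is easy to show in miniature. Below is a toy incremental inverted index of my own devising, not Percolator's actual API; the real system builds on Bigtable with distributed transactions and observers. The point: an update costs on the order of the changed document, not of the whole corpus.)

```python
# Toy incremental inverted index: only changed documents are (re)processed.
index = {}   # word -> set of doc ids
docs = {}    # doc id -> text currently indexed

def apply_update(doc_id, new_text):
    # Retract the document's old postings ...
    for word in set(docs.pop(doc_id, "").split()):
        index[word].discard(doc_id)
    if new_text is None:             # a deletion: nothing more to do
        return
    # ... then index only the new version.
    docs[doc_id] = new_text
    for word in set(new_text.split()):
        index.setdefault(word, set()).add(doc_id)

apply_update("d1", "big data is big")
apply_update("d2", "hadoop is batch oriented")
apply_update("d1", "big data moved on")  # re-indexes d1 only, not the corpus
print(sorted(index["big"]))              # -> ['d1']
```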
-
Dremel for ad hoc analytics
-
many interface layers have been built
-
purpose-built for organized data processing (jobs). It is baked from the core for workflows, not ad hoc exploration
-
BI/analytics queries are fundamentally ad hoc, interactive, low-latency
-
I’m not aware of any compelling open source alternatives to Dremel
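(Part of why Dremel answers ad hoc queries interactively is columnar storage: an aggregation reads only the columns it names. A toy contrast of my own, not Dremel's actual format, which additionally handles nested records.)

```python
# Row store: every query touches whole records ...
rows = [
    {"country": "DE", "latency_ms": 120, "bytes": 3400},
    {"country": "US", "latency_ms": 80,  "bytes": 1200},
    {"country": "DE", "latency_ms": 95,  "bytes": 2100},
]

# ... column store: each field is one contiguous array, so a query like
# SELECT avg(latency_ms) WHERE country = 'DE' reads just two of the columns.
columns = {
    "country":    ["DE", "US", "DE"],
    "latency_ms": [120, 80, 95],
    "bytes":      [3400, 1200, 2100],
}

hits = [i for i, c in enumerate(columns["country"]) if c == "DE"]
print(sum(columns["latency_ms"][i] for i in hits) / len(hits))  # -> 107.5
```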
-
Pregel for analyzing graph data
-
certain core assumptions of MapReduce are at fundamental odds with analyzing networks of people, telecommunications equipment, documents and other graph data structures
-
petabyte-scale graph processing on distributed commodity machines
-
Hadoop, which often causes exponential data amplification in graph processing
-
execute graph algorithms such as SSSP (single-source shortest paths) or PageRank in dramatically shorter time
-
near linear scaling of execution time with graph size
-
the only viable option in the open source world is Giraph
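(Pregel's model is "think like a vertex": computation proceeds in supersteps; each vertex consumes incoming messages, updates its value, and sends messages along its out-edges, until no messages remain in flight. A compact single-machine sketch of SSSP in that style; Pregel's and Giraph's real APIs differ in detail.)

```python
import math

def pregel_sssp(edges, source):
    """Single-source shortest paths, Pregel style.
    edges: {vertex: [(neighbor, weight), ...]} with every vertex as a key."""
    dist = {v: math.inf for v in edges}
    inbox = {v: [] for v in edges}
    inbox[source].append(0)                    # superstep 0: seed the source

    while any(inbox.values()):                 # halt when no messages in flight
        outbox = {v: [] for v in edges}
        for v, msgs in inbox.items():          # each vertex acts on its inbox
            if msgs and min(msgs) < dist[v]:
                dist[v] = min(msgs)            # improvement: adopt and propagate
                for u, w in edges[v]:
                    outbox[u].append(dist[v] + w)
        inbox = outbox
    return dist

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(pregel_sssp(graph, "a"))  # -> {'a': 0, 'b': 1, 'c': 3}
```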
-
if you’re trying to process dynamic data sets, ad-hoc analytics or graph data structures, Google’s own actions clearly demonstrate better alternatives to the MapReduce paradigm
-
Percolator, Dremel and Pregel make an impressive trio and comprise the new canon of big data
-
similar impact on IT as Google’s original big three of GFS, GMR, and BigTable
Continue reading "Link roundup, week 29/2012"