I will introduce some ideas being developed at MIT Lincoln Laboratory concerning large random graphs. Examples of large random graphs include the Internet graph, the Wikipedia graph, the Facebook graph, and biological networks. These graphs contain thousands or millions of vertices and edges that arise essentially at random, and their structures can differ dramatically from one another.
A difficult question is: how do we detect peculiar subgraphs in such graphs? One approach is to model the observed graph as a background noise graph plus a signal graph. How, then, does one separate the signal from the noise?
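A minimal sketch of this noise-plus-signal model, under the common assumption that the background is an Erdős–Rényi random graph and the signal is a planted dense subgraph (the parameters n, p, q, and the set of planted vertices here are illustrative choices, not anything specified in the text):

```python
import random

random.seed(0)

n, p = 200, 0.05              # background: Erdos-Renyi G(n, p) noise
signal_nodes = range(12)      # hypothetical planted "signal" vertices
q = 0.8                       # assumed edge probability inside the signal

# Build the combined graph: each pair of vertices gets an edge with
# probability q if both lie in the signal set, and p otherwise.
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        prob = q if (u in signal_nodes and v in signal_nodes) else p
        if random.random() < prob:
            adj[u].add(v)
            adj[v].add(u)

# One crude way the signal can stand out from the noise: planted
# vertices tend to have anomalously high degree.
avg_signal = sum(len(adj[v]) for v in signal_nodes) / len(signal_nodes)
avg_all = sum(len(adj[v]) for v in adj) / n
print(avg_signal, avg_all)
```

With these parameters a planted vertex has expected degree roughly 11(0.8) + 188(0.05) ≈ 18, versus about 10 for a typical background vertex, so even a simple degree statistic hints at the signal; real detection methods are of course far more subtle.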
I spent nine weeks this summer at MIT Lincoln Laboratory thinking about this topic.