
Last Update

This profile was last updated on 2015-11-17.


Background Information

Employment History

Beowulf

Codehack LLC

BadKarma.NET Inc

Chief Technology Officer, PBM

Founder and Distinguished Engineer, PathScale, Inc.

Chief Technology Officer, Blekko

Chief Scientist, QLogic Corporation

Senior Engineer, High Performance Technologies Inc

Affiliations

Advisory Board Member
Common Crawl

Education

BA, Math and Physics, Brandeis University

MA, Astronomy, University of Virginia

Web References (140 Total References)


Greg Lindahl's Home Page: Photos

www.pbm.com [cached]

photo gallery | images of Greg Lindahl

...
Greg Lindahl (lindahl@pbm.com)


Even the concept of cores >>> ...

www.beowulf.org [cached]

>>> Even the concept of cores themselves are only six or seven years old,
>>> before then a CPU was just a CPU and you would refer to "a N CPU cluster".
>>
>> and to be on the safe side (wrt forms of simultaneous multi-threading),
>> we should probably try to use "thread" instead. meaning a single hardware
>> execution context.
>>
>> _______________________________________________
>> Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
>> To change your subscription (digest mode or unsubscribe) visit
>> http://www.beowulf.org/mailman/listinfo/beowulf
>
> -- Jonathan Aquilina

From lindahl at pbm.com Sat May 2 16:29:23 2009
From: lindahl at pbm.com (Greg Lindahl)
Date: Wed Aug 18 01:08:43 2010
Subject: [Beowulf] 1 multicore machine cluster
In-Reply-To:

...
If someone uses "nodes" when talking about a Linux cluster, the meaning is pretty clear, even if the boxes are NUMA. -- greg From lindahl at pbm.com Sat May 2 16:42:15 2009 From: lindahl at pbm.com (Greg Lindahl) Date: Wed Aug 18 01:08:43 2010 Subject: [Beowulf] newbie In-Reply-To: References: Message-ID: > [Intel and Shanghai] > > Is this deliberate? > > In the sense that they have no desire to support > competitors hardware, yes.
...
From gus at ldeo.columbia.edu Sun May 3 11:16:31 2009 From: gus at ldeo.columbia.edu (Gus Correa) Date: Wed Aug 18 01:08:43 2010 Subject: [Beowulf] newbie In-Reply-To: References: Message-ID: Thank you Chris, Bill, Greg, and Joe.
...
wrote: > Thank you Chris, Bill, Greg, and Joe.
...
wrote: > >> Thank you Chris, Bill, Greg, and Joe. > > No worries! > >> This is gone: >> >> http://www.swallowtail.org/naughty-intel.html > > ...but not forgotten... > > http://web.archive.org/web/20071022045356/http://www.swallowtail.org/naughty-intel.html > Aussieome!
...
On Sun, May 3, 2009 at 5:12 AM, Greg Lindahl
...
From lindahl at pbm.com Thu May 7 14:49:18 2009 From: lindahl at pbm.com (Greg Lindahl) Date: Wed Aug 18 01:08:43 2010 Subject: [Beowulf] newbie In-Reply-To:
...
Nothing scares me more than a PhD oceanographer with 2 formal courses in Fortran in his educational summary... gerry From lindahl at pbm.com Thu May 7 16:19:22 2009 From: lindahl at pbm.com (Greg Lindahl) Date: Wed Aug 18 01:08:43 2010 Subject: [Beowulf] Beowulf SysAdmin Job Description In-Reply-To: References:
...
Message-ID: On Thu, May 07, 2009 at 04:19:22PM -0700, Greg Lindahl wrote: > On Thu, May 07, 2009 at 06:10:53PM -0500, Gerald Creager wrote: > > > I think I've said this before here, but I'll risk it again...
...
PS. The Athelon was my typo, earlier sorry! -- Rahul From lindahl at pbm.com Mon May 11 13:32:43 2009 From: lindahl at pbm.com (Greg Lindahl) Date: Wed Aug 18 01:08:44 2010 Subject: [Beowulf] evaluating FLOPS capacity of our cluster In-Reply-To:
...
Message-ID:

On Mon, May 11, 2009 at 02:30:31PM -0400, Mark Hahn wrote:
> 80 is fairly high, and generally requires a high-bw, low-lat net.
> gigabit, for instance, is normally noticably lower, often not much
> better than 50%. but yes, top500 linpack is basically just
> interconnect factor * peak, and so unlike real programs...

Don't forget that it depends significantly on memory size. -- greg

From richard.walsh at comcast.net Mon May 11 13:50:20 2009
From: richard.walsh at comcast.net (richard.walsh@comcast.net)
Date: Wed Aug 18 01:08:44 2010
Subject: [Beowulf] evaluating FLOPS capacity of our cluster
In-Reply-To:
Message-ID:

>----- Original Message -----
>From: "Greg Lindahl"
...
Greg is also right on the memory size being a factor allowing larger N to be used for HPL.
...
Message-ID:

Hi Tom, Greg, Rahul, list

Tom Elken wrote:
>> On Behalf Of Rahul Nabar
>>
>> Rmax/Rpeak= 0.83 seems a good guess based on one very similar system
>> on the Top500.
>>
>> Thus I come up with a number of around 1.34 TeraFLOPS for my cluster
>> of 24 servers.
...
>> Nothing too accurate but I do not want to be an order of magnitude off.
>> [maybe a decimal mistake in math!]
>
> You're in the right ballpark.
> I recently got 0.245 Tflops on HPL on a 4-node version of what you have
> (with Goto BLAS), so 6x that # is in the same ballpark as your
> 1.34 TF/s estimate.
> My CPUs were 2.3 GHz Opteron 2356 instead of your 2.2 GHz.
>
> Greg is also right on the memory size being a factor allowing larger N
> to be used for HPL.
> I used a pretty small N on this HPL run since we were running it
> as part of a HPC Challenge suite run,
> and a smaller N can be better for PTRANS if you are interested
> in the non-HPL parts of HPCC (as I was).
> I have 16GB/node, the maximum possible is 128GB for this motherboard.
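The TeraFLOPS estimate quoted above is easy to reproduce with back-of-envelope arithmetic. The Python sketch below is illustrative only: the per-node layout (2 sockets x 4 cores) and the 4 double-precision flops per core per clock are assumptions typical of 2009-era Opterons rather than figures stated in the thread, and 0.83 is the thread's own Rmax/Rpeak guess.

# Back-of-envelope Rpeak and Rmax estimate for the 24-node cluster above.
nodes = 24
sockets_per_node = 2      # assumption, not stated in the thread
cores_per_socket = 4      # assumption: quad-core Opteron
ghz = 2.2                 # clock speed mentioned in the thread
flops_per_cycle = 4       # assumption: SSE double precision (2 adds + 2 muls)

rpeak_gflops = nodes * sockets_per_node * cores_per_socket * ghz * flops_per_cycle
rmax_gflops = 0.83 * rpeak_gflops     # Rmax/Rpeak guess quoted in the thread

print(f"Rpeak ~ {rpeak_gflops:.0f} GFLOPS")           # ~1690 GFLOPS
print(f"Estimated Rmax ~ {rmax_gflops/1000:.2f} TF")  # ~1.40 TF

Under these assumptions the estimate lands within a few percent of the 1.34 TF/s figure quoted in the thread, which is about as close as this kind of guess gets.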
...
Ashley, From lindahl at pbm.com Mon May 11 15:28:14 2009 From: lindahl at pbm.com (Greg Lindahl) Date: Wed Aug 18 01:08:44 2010 Subject: [Beowulf] evaluating FLOPS capacity of our cluster In-Reply-To: References:
...
Message-ID:

Greg Lindahl wrote:
> On Mon, May 11, 2009 at 05:56:43PM -0400, Gus Correa wrote:
>
>> However, here is somebody that did an experiment with increasing
>> values of N, and his results suggest that performance increases
>> logarithmically with problem size (N), not linearly,
>> saturating when you get closer to the maximum possible for your
>> current memory size.
>
> This is well-known.
...
Imagining the nodes had 128GB, N=554,000, what is your guess for Rmax/Rpeak?
(YMMV is not an answer! :) )

Many thanks,
Gus Correa

PSs - The problem with this HPL thing is that it becomes addictive, and I need
to go do some work, production, not tests, tests, tests ...

---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------

From lindahl at pbm.com Mon May 11 16:03:53 2009
From: lindahl at pbm.com (Greg Lindahl)
Date: Wed Aug 18 01:08:44 2010
Subject: [Beowulf] evaluating FLOPS capacity of our cluster
In-Reply-To:
References:
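The link between node memory and the HPL problem size N that runs through this exchange follows from HPL factoring a dense N x N double-precision matrix (8 bytes per element) that should fill most of the cluster's memory. The helper below is a hypothetical illustration in Python; the 75-80% memory fraction is a common rule of thumb, not a number given in the thread.

import math

def hpl_problem_size(total_mem_gib, mem_fraction=0.8):
    """Rough HPL N for a cluster with the given total memory (GiB)."""
    usable_bytes = mem_fraction * total_mem_gib * 2**30
    return int(math.sqrt(usable_bytes / 8))   # 8 bytes per matrix element

print(hpl_problem_size(24 * 16))          # 16 GB/node as in the thread: N ~ 203,000
print(hpl_problem_size(24 * 128))         # hypothetical 128 GB/node case: N ~ 574,000
print(hpl_problem_size(24 * 128, 0.75))   # at 75% of memory: N ~ 556,000, close to
                                          # the N = 554,000 figure quoted above

A larger memory allows a larger N, which raises the compute-to-communication ratio and pushes Rmax closer to Rpeak, which is why memory size keeps coming up in the efficiency estimates above.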
...
Message-ID: On Thu, May 07, 2009 at 02:49:18PM -0700, Greg Lindahl wrote: > On Thu, May 07, 2009 at 05:43:02PM -0400, Mark Hahn wrote: > >> Probably AMD had been thinking hard on this and decided to make compilers at > >> last. http://developer.amd.com/cpu/open64/pages/default.aspx > > > > interesting. it wasn't obvious at a glance how this actually differed > > from gcc. besides a comparison on real code, it would > > be interesting to know the reason (political, practical?) > > what keeps open64 from simply contributing to mainstream gcc. > > is there some conflicting infrastructure?
...
DDR2 memory subsystem. Does a socket upgrade make sense with the boards that
you already have? This is the choice that AMD hopes you will make. Intel, on
the other hand, wants to look at the Xeon 5500's performance and power
management, and go for the forklift upgrade. rbw

-- Rahul

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://www.scyld.com/pipermail/beowulf/attachments/20090512/7ddc05a9/attachment.html

From Greg at keller.net Tue May 12 17:05:41 2009
From: Greg at keller.net (Greg Keller)
Date: Wed Aug 18 01:08:44 2010
Subject: [Beowulf] recommendations for cluster upgrades
In-Reply-To:
References:
Message-ID:

Rahul,

> I'm currently shopping around for a cluster-expansion and was shopping
> for options.
...
From lindahl at pbm.com Thu May 14 15:26:33 2009 From: lindahl at pbm.com (Greg Lindahl) Date: Wed Aug 18 01:08:45 2010 Subject: [Beowulf] Should I go for diskless or not?
...
I think it was a MadTV skit. -- Joseph Landman, Ph.D Founder and CEO Scalable Informatics LLC, email: landman@scalableinformatics.com web : http://www.scalableinformatics.com http://jackrabbit.scalableinformatics.com phone: +1 734 786 8423 x121 fax : +1 866 888 3112 cell : +1 734 612 4615 From lindahl at pbm.com Fri May 15 08:58:15 2009 From: lindahl at pbm.com (Greg Lindahl) Date: Wed Aug 18 01:08:45 2010 Subject: [Beowulf] Should I go for diskless or not?
...
Best regards, Tiago Marques > > > -- > Rahul > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://www.scyld.com/pipermail/beowulf/attachments/20090517/e2dfe6f6/attachment.html From lindahl at pbm.com Sat May 16 18:37:14 2009 From: lindahl at pbm.com (Greg Lindahl) Date: Wed Aug 18 01:08:45 2010 Subject: [Beowulf] recommendations for cluster upgrades In-Reply-To:
...
On Sat, May 16, 2009 at 8:37 PM, Greg Lindahl
...
Message-ID: On Sat, May 16, 2009 at 08:52:07PM -0500, Rahul Nabar wrote: > On Sat, May 16, 2009 at 8:37 PM, Greg Lindahl
...
From Greg at keller.net Tue May 26 11:10:04 2009 From: Greg at keller.net (Greg Keller) Date: Wed Aug 18 01:08:47 2010 Subject: [Beowulf] Station wagon full of tapes In-Reply-To: References: Message-ID: On May 26, 2009, at 10:20 AM, "Robert G. Brown"
...
From lindahl at pbm.com Tue May 26 15:14:57 2009 From: lindahl at pbm.com (Greg Lindahl) Date: Wed Aug 18 01:08:47 2010 Subject: [Beowulf] Station wagon full of tapes In-Reply-To: References: Message-ID: On Tue, May 26, 2009 at 09:54:15AM -0400, Chris Dagdigian wrote: > - Once we process the data to get the derived results, the primary data > just needs to go somewhere cheap If you only rarely re-read the primary data, I'd think a stack of SATA drives in a cabinet would probably do the trick.
...
Finally, note that this amount of data is considered small




Olympia G2 mailing list: thanks

www.pbm.com [cached]

Greg Lindahl (lindahl@pbm.com)


Olympia G2 mailing list: RE: your mail

www.pbm.com [cached]

From: Greg Lindahl [mailto:lindahl@pbm.com]

...
From: Greg Lindahl [mailto:lindahl@pbm.com]

