iMechanica - Comments for "Any tips/comments regarding the latest version of the C++ library: Eigen (v. 3.0)?"
https://imechanica.org/node/9987
Comments for "Any tips/comments regarding the latest version of the C++ library: Eigen (v. 3.0)?"
Further on solvers
https://imechanica.org/comment/16480#comment-16480
<a id="comment-16480"></a>
<p><em>In reply to <a href="https://imechanica.org/node/9987">Any tips/comments regarding the latest version of the C++ library: Eigen (v. 3.0)?</a></em></p>
<div class="field field-name-comment-body field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p>
Dear Thomas,
</p>
<p>
0. Thanks, once again, for a wonderfully detailed reply.</p>
<p>
<strong>1. About books:</strong> It was only after writing my request about "left-looking" etc. that it occurred to me to look it up in Numerical Recipes. (I have the 2/e, in C++.) I then realized how much I have to learn. ... I will complete NR and then try to go through Davis' book. Your comments will also come in handy in this learning process, I am sure.
</p>
<p>
<strong>2. The absence of comparative data:</strong> However, there is another thing I wish to note, namely, the absence of comparative performance data for large systems on the common (PC) platform. Allow me to explain.
</p>
<p>
A while ago, we had a discussion at iMechanica concerning FORTRAN vs. C++, and someone mentioned that the default FORTRAN code would be faster than the default C++ code. (I tried searching for that thread, but in vain. The thread I have in mind is not this one [<a href="http://imechanica.org/node/4822" target="_blank">^</a>] or this one [<a href="http://birch.seas.harvard.edu/node/8240" target="_blank">^</a>].) With expression templates, even C++ code should be fast. But how fast, or how much faster? One would like to have some definitive data for matrices of orders 10K+, but such data are surprisingly difficult to find.
</p>
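<p>To make the "expression templates" point concrete, here is a minimal, hypothetical sketch of the idea (this is not Eigen's actual code; all names are made up): writing "a + b + c" builds a lightweight expression object instead of allocating temporaries, and a single loop runs only at assignment time, which is how modern C++ can match hand-written FORTRAN loops.</p>

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Lazy sum node: holds references, evaluates element-wise on demand.
template <typename L, typename R>
struct Sum {
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

struct Vec {
    std::vector<double> data;
    explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assigning any expression evaluates it in ONE pass -- no temporaries.
    template <typename E>
    Vec& operator=(const E& e) {
        for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
        return *this;
    }
};

// Unconstrained for brevity; a real library would restrict the operand types.
template <typename L, typename R>
Sum<L, R> operator+(const L& l, const R& r) { return Sum<L, R>{l, r}; }
```

<p>With this, "out = a + b + c" performs one loop over the data instead of two loops and two temporary vectors.</p>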
<p>
Another thing. <strong>All</strong> solvers run <strong>very</strong> slowly as the system size grows large (matrix order &gt; 10K, say, per processor). The subtle differences among the algorithms will surely count. But by how much, for the large systems? By what percentage or factor? I have no idea, and <strong>no one</strong> backs up the descriptions of algorithms with quantitative data....
</p>
<p>
The fact of the matter is, if the very standard algorithms (such as those given in, say, NR) are not going to be slower by more than 50%, then, frankly, one wouldn't care for any advanced library. (The emphasis, again, is on large systems.) Similarly, if using a dense-matrix algorithm is going to be permissible (given the routine 2 GB and 4 GB RAMs in ordinary desktops today), then, again, one wouldn't bother going specifically for the sparse-matrix algorithms.
</p>
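<p>The dense-versus-sparse question above can at least be bounded by simple arithmetic: an order-n dense matrix of doubles needs 8n² bytes, so order 10K just fits in a 2 GB machine while order 20K already does not. A trivial sketch:</p>

```cpp
#include <cassert>
#include <cstdint>

// Storage needed for a dense n-by-n matrix of 8-byte doubles.
std::uint64_t dense_bytes(std::uint64_t n) { return n * n * 8ULL; }
```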
<p>
But there is no way to get such quantitative data easily. Typically, what one runs into is something like this: "Ours is the greatest library since Backus wrote the rule-book for FORTRAN; we even use MPI." Ok, but what about the simpler shared-memory support via, say, OpenMP? Blank-out. What about GPGPU support? Blank-out. What about eigenvalue computations in addition to the linear system solution? Blank-out. What about easy compilation on the Windows platform using VC++ (and not MinGW etc.)? Mostly blank-out.
</p>
<p>Today, the fact is, in order to evaluate a library (say, for a commercial application in my day job), I have to download it myself, compile it, run test data, and come to my own conclusions. Which leads me to the next point.</p>
<p>
<strong>3. An Internet Repository for Performance Data of Solver Libraries:</strong> Can't we have an Internet repository of sorts, on the lines of UFlorida's matrix collection, where people can go and report the sort of performance data they get with different hardware and software combinations?
</p>
<p>
<strong>4. What I am going to do:</strong> Anyway, enough by way of a rant. I think over the next couple of months (or more), I will myself do something towards this idea. I will first write the dumbest (i.e. the most simple-minded) algorithms (e.g., for the direct solution, a Numerical Analysis 101-type implementation of Gaussian elimination), in both FORTRAN and C++, and take their data as the common baseline for comparison. Then, I will compile the NR book algorithms as the second baseline. Then, I will begin compiling and testing some 8 to 10 commonly available libraries. I will upload my programs and results at some suitable place (perhaps at iMechanica).
</p>
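<p>For concreteness, the "Numerical Analysis 101" direct-solver baseline mentioned above might look like the following sketch in C++ (a deliberately simple-minded dense Gaussian elimination with partial pivoting; this is an illustration, not code taken from NR):</p>

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Naive dense Gaussian elimination with partial pivoting: solves A x = b.
// O(n^3) flops, no blocking, no sparsity -- a baseline, not production code.
std::vector<double> gauss_solve(std::vector<std::vector<double>> A,
                                std::vector<double> b) {
    const std::size_t n = A.size();
    for (std::size_t k = 0; k < n; ++k) {
        // Partial pivoting: swap the row with the largest |A[i][k]| into place.
        std::size_t p = k;
        for (std::size_t i = k + 1; i < n; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[p][k])) p = i;
        std::swap(A[k], A[p]);
        std::swap(b[k], b[p]);
        // Eliminate the entries below the diagonal in column k.
        for (std::size_t i = k + 1; i < n; ++i) {
            const double m = A[i][k] / A[k][k];
            for (std::size_t j = k; j < n; ++j) A[i][j] -= m * A[k][j];
            b[i] -= m * b[k];
        }
    }
    // Back substitution on the resulting upper-triangular system.
    std::vector<double> x(n);
    for (std::size_t i = n; i-- > 0; ) {
        double s = b[i];
        for (std::size_t j = i + 1; j < n; ++j) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }
    return x;
}
```

<p>Its O(n³) flop count and cache-unfriendly inner loops are exactly what the tuned libraries improve upon, which is what makes it a fair baseline.</p>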
<p>
... I will surely do that, and while doing so, esp. while implementing the NR book algos, I will also begin learning about this fascinating topic of numerical algorithmics. As you can see from the two past threads quoted above, I have been looking around for long enough, and it's high time I began wrapping it up for at least some intermediate conclusions. ... I think a dedicated blog for the comparative data would be a good idea... I will announce its creation as soon as I create it.
</p>
<p>
<br />
--Ajit
</p>
<p>
- - - - - <br />
[E&OE]
</p>
</div></div></div>
Tue, 29 Mar 2011 06:55:26 +0000 | Ajit R. Jadhav | comment 16480 at https://imechanica.org
Direct solvers
https://imechanica.org/comment/16474#comment-16474
<a id="comment-16474"></a>
<p><em>In reply to <a href="https://imechanica.org/node/9987">Any tips/comments regarding the latest version of the C++ library: Eigen (v. 3.0)?</a></em></p>
<div class="field field-name-comment-body field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p>
Dear Ajit,
</p>
<p>
</p>
<p>
In fact, concerning the (let us say) Cholesky decomposition, there are six different ways to write the algorithm, depending on whether you start working with rows or columns (basically, there are three nested "for" loops that can be ordered differently). The right-looking and left-looking versions are two of those. The naming comes from the fact that in the right-looking version (which is actually row by row) you update the terms of the factorization that lie after the diagonal (to the right), while in the left-looking version (which is column by column, and is the actual implementation in CSparse) you do it on the left. Maybe I'm wrong (I often confuse the different factorizations), but the idea is close to that.
</p>
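<p>The two orderings described above can be sketched as follows (an illustrative toy version on dense matrices, computing the lower-triangular factor; not CSparse's actual sparse implementation). Both produce the same Cholesky factor L; they differ only in when the updates are applied:</p>

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Right-looking ("outer product") Cholesky: as soon as column k of L is
// known, immediately update the trailing submatrix to its right/below.
Mat cholesky_right(Mat A) {
    const std::size_t n = A.size();
    for (std::size_t k = 0; k < n; ++k) {
        A[k][k] = std::sqrt(A[k][k]);
        for (std::size_t i = k + 1; i < n; ++i) A[i][k] /= A[k][k];
        for (std::size_t j = k + 1; j < n; ++j)      // eager trailing update
            for (std::size_t i = j; i < n; ++i)
                A[i][j] -= A[i][k] * A[j][k];
    }
    for (std::size_t i = 0; i < n; ++i)              // zero the upper triangle
        for (std::size_t j = i + 1; j < n; ++j) A[i][j] = 0.0;
    return A;
}

// Left-looking Cholesky: before finishing column j, pull in ("look left"
// at) the deferred updates from all previously computed columns k < j.
Mat cholesky_left(Mat A) {
    const std::size_t n = A.size();
    for (std::size_t j = 0; j < n; ++j) {
        for (std::size_t k = 0; k < j; ++k)          // deferred updates
            for (std::size_t i = j; i < n; ++i)
                A[i][j] -= A[i][k] * A[j][k];
        A[j][j] = std::sqrt(A[j][j]);
        for (std::size_t i = j + 1; i < n; ++i) A[i][j] /= A[j][j];
    }
    for (std::size_t i = 0; i < n; ++i)              // zero the upper triangle
        for (std::size_t j = i + 1; j < n; ++j) A[i][j] = 0.0;
    return A;
}
```

<p>The right-looking version pushes updates to the trailing submatrix immediately; the left-looking one defers them and pulls them in column by column.</p>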
<p>
Depending on how you write the factorization algorithm (that is, which update of the terms you choose, or the order of the "for" loops), the variants have different properties, especially regarding how and when the newly computed values of the factorization should be applied as updates. Some implementations (called supernodal) gather several nodes together and do the update work in a block fashion (from time to time, not over the whole matrix). This makes it possible to use dense-matrix algorithms (such as dense matrix products), which are highly optimized in BLAS implementations. Even if those optimizations affect only a fraction of the complete factorization, the speedup can be impressive. Multifrontal solvers exploit the same trick, but with a different underlying idea.
</p>
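<p>The block-update idea behind supernodal codes can be illustrated on a dense matrix (a hypothetical sketch, not an actual supernodal solver, which would additionally exploit sparsity): factor a small diagonal panel with scalar code, then apply the whole trailing update as one dense rank-b operation, which is the part a real solver hands to BLAS3 kernels such as dgemm/dsyrk:</p>

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Blocked right-looking Cholesky with panel width b (dense toy version).
Mat cholesky_blocked(Mat A, std::size_t b) {
    const std::size_t n = A.size();
    for (std::size_t k = 0; k < n; k += b) {
        const std::size_t kb = std::min(b, n - k);
        // 1. Factor the panel (columns k..k+kb-1) with scalar code.
        for (std::size_t j = k; j < k + kb; ++j) {
            for (std::size_t t = k; t < j; ++t)
                for (std::size_t i = j; i < n; ++i)
                    A[i][j] -= A[i][t] * A[j][t];
            A[j][j] = std::sqrt(A[j][j]);
            for (std::size_t i = j + 1; i < n; ++i) A[i][j] /= A[j][j];
        }
        // 2. One dense rank-kb trailing update, A22 -= L21 * L21^T:
        //    this is the block a supernodal solver delegates to BLAS3.
        for (std::size_t j = k + kb; j < n; ++j)
            for (std::size_t i = j; i < n; ++i)
                for (std::size_t t = k; t < k + kb; ++t)
                    A[i][j] -= A[i][t] * A[j][t];
    }
    for (std::size_t i = 0; i < n; ++i)              // zero the upper triangle
        for (std::size_t j = i + 1; j < n; ++j) A[i][j] = 0.0;
    return A;
}
```

<p>The wider the panel, the more of the arithmetic lands in step 2, which is why a supernodal factorization can recover a sizeable fraction of dense-BLAS speed even on a sparse problem.</p>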
<p>
A good reference (though I find it requires a lot of investment to be fully understood) is the book by Tim Davis: <a href="http://www.amazon.com/Direct-Methods-Systems-Fundamentals-Algorithms/dp/0898716136">http://www.amazon.com/Direct-Methods-Systems-Fundamentals-Algorithms/dp/...</a>
</p>
<p>
As you probably know, the performance of BLAS has, until now, depended on the hardware architecture (cache properties, pipelining, etc.), since the libraries used hand-written assembly to get full speed. The automatic tuning of ATLAS changed this quite a bit, since it can reach near-optimal performance on almost any machine. A main part of Eigen, however, is devoted to generating comparable code using SSE3 instructions, which is (1) higher up in the code hierarchy and (2) more generic. So Eigen is able to generate optimized BLAS-like kernels on any hardware supporting SSE3 instructions. That's great, but it is limited to dense algorithms.</p>
<p>On the other hand, I know they are developing some algorithms for sparse matrices, but as far as I know those algorithms won't benefit much from the speedup of the dense kernels. However, I last checked that a year ago, so I should check again before being fully affirmative on that point.</p>
<p>Best Regards,
</p>
<p>
Thomas
</p>
<p>
</p>
</div></div></div>
Sun, 27 Mar 2011 12:02:53 +0000 | tlaverne | comment 16474 at https://imechanica.org
Eigen 3.0 and solver libraries---Reply to Thomas
https://imechanica.org/comment/16472#comment-16472
<a id="comment-16472"></a>
<p><em>In reply to <a href="https://imechanica.org/comment/16464#comment-16464">Dear Ajit,
To me the</a></em></p>
<div class="field field-name-comment-body field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p>
Dear Thomas,
</p>
<p>
<br />
Thank you very much for the clarification. ... So, I gather that since Eigen 3.0 isn't going to be multifrontal for a while, other classical BLAS-based solvers would be better... Hmmm... </p>
<p>I am pretty much a novice to these libraries, and very much in the process of learning about them. So, could you please clarify or point out links for what is meant by "left-looking"?</p>
<p>Another point. I would like to have a little private communication with you concerning our current requirements and the capabilities of various solver libraries (both from public and commercial domains). If you don't mind, could you please drop me an email at aj175tp [at] yahoo [do t] co [d o t] in ? Thanks in advance.
</p>
<p>
--Ajit
</p>
<p>
</p>
<p>
- - - - - <br />
[E&OE]
</p>
</div></div></div>
Sun, 27 Mar 2011 05:18:21 +0000 | Ajit R. Jadhav | comment 16472 at https://imechanica.org
Dear Ajit,
To me the
https://imechanica.org/comment/16464#comment-16464
<a id="comment-16464"></a>
<p><em>In reply to <a href="https://imechanica.org/node/9987">Any tips/comments regarding the latest version of the C++ library: Eigen (v. 3.0)?</a></em></p>
<div class="field field-name-comment-body field-type-text-long field-label-hidden"><div class="field-items"><div class="field-item even"><p>
Dear Ajit,
</p>
<p>
To me, the added value of Eigen is that it allows automatic tuning of operations on dense matrices and vectors. So from that point of view it is a very promising and exciting code. As far as I know, the direct sparse solver of Eigen is not very state-of-the-art: it is a copy of the CSparse code of Tim Davis. I asked the Eigen team whether they were considering implementing a left-looking or multifrontal direct solver that could take advantage of the BLAS3 operations (which they optimize), but unfortunately they won't (at least not in the near future). So, to have an efficient direct solver, you should still rely on other classical BLAS-based libraries.
</p>
<p>
</p>
<p>
My blog on research on Hybrid Solvers: <a href="http://mechenjoy.blogspot.com/">http://mechenjoy.blogspot.com/</a>
</p>
</div></div></div>
Thu, 24 Mar 2011 14:13:35 +0000 | tlaverne | comment 16464 at https://imechanica.org