
News from projects

Low available work.

SETI@Home - Sat, 18.01.2020 - 20:51
For a couple of reasons, the result table has grown to the point where it no longer fits in main memory. That has been slowing the validators and assimilators, which is causing the result table to grow further.

We'd like to get it down to a manageable size before our Tuesday outage. To that end we are throttling work generation to a rate at which the table size is shrinking. We hope that this rate will increase as the table gets smaller.

So for the next few days work will be hard to come by (but not zero).
Categories: News from projects

New Runs for MilkyWay@home N-Body (1/15/2020)

Milkyway@Home - Thu, 16.01.2020 - 17:41
Happy New Year everyone!

I hope you all had a fantastic New Year's celebration. To start off the new year, I thought we should put up a few new runs for N-Body, as the previous runs have converged nicely. The new runs I put up are:

Thank you all for your continued support, and I wish you all the best for the new decade.


Why do people run SETI@home?

SETI@Home - Thu, 16.01.2020 - 05:49
Check out "Stars in Their Eyes?", a research paper from the University of Geneva about why people run SETI@home.

GFN-524288 Mega Prime!

PrimeGrid - Mon, 13.01.2020 - 18:29
On 24 December 2019, 08:20:15 UTC, PrimeGrid's Generalized Fermat Prime Search found the Mega Prime:

3214654^524288+1

The prime is 3,411,613 digits long and enters Chris Caldwell's The Largest Known Primes Database ranked 3rd for Generalized Fermat primes and 30th overall.

The discovery was made by Alen Kecic (Freezing) of Germany using a GeForce GTX 1660 Ti in an Intel(R) Core(TM) i7-7820X CPU @ 3.60GHz with 32GB RAM, running Microsoft Windows 10 Professional x64 Edition. This GPU took about 51 minutes to complete the probable prime (PRP) test using GeneferOCL5. Alen is a member of the SETI.Germany team.

The PRP was verified on 24 December 2019, 10:12:18 UTC by John Holmes (John J. Holmes) of the United States using a GeForce GTX 970 in an Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz with 16GB RAM, running Microsoft Windows 10 Professional x64 Edition. This computer took about 2 hours, 4 minutes to complete the probable prime (PRP) test using GeneferOCL3.

The PRP was confirmed prime by an Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz with 32GB RAM, running Microsoft Windows 10 Professional x64 Edition. This computer took about 1 day, 1 hour, 45 minutes to complete the primality test using multithreaded LLR.

For more details, please see the official announcement.
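The quoted digit count can be checked with elementary arithmetic: the number of decimal digits of b^n + 1 is floor(n * log10(b)) + 1 (adding 1 to b^n cannot change the digit count at this size). A quick sketch in Python:

```python
import math

# Decimal digits of b^n + 1, here b = 3214654, n = 524288
b, n = 3214654, 524288
digits = math.floor(n * math.log10(b)) + 1
print(digits)  # 3411613
```

This matches the 3,411,613 digits stated in the announcement.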

ESP Mega Prime!

PrimeGrid - Mon, 13.01.2020 - 14:50
On 24 December 2019, 01:28:13 UTC, PrimeGrid's Extended Sierpinski Problem found the Mega Prime:

99739*2^14019102+1

The prime is 4,220,176 digits long and will enter Chris Caldwell's The Largest Known Primes Database ranked 20th overall. This find eliminates k=99739; 9 k's remain in the Extended Sierpinski Problem.

The discovery was made by Brian D. Niegocki (Penguin) of the United States using an Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz with 1GB RAM, running Linux Ubuntu. This computer took about 14 hours, 14 minutes to complete the primality test using LLR. Brian is a member of the Antarctic Crunchers team.

The prime was verified on 24 December 2019, 04:37:31 UTC by Pavel Atnashev (Pavel Atnashev) of Russia using an Intel(R) Xeon(R) E5-2680 v2 @ 2.80GHz with 8GB RAM, running Linux. This computer took about 4 hours, 6 minutes to complete the primality test using LLR. Pavel is a member of the Ural Federal University team.

For more details, please see the official announcement.
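The quoted length can likewise be verified from logarithms: a number of the form k*2^n + 1 has floor(log10(k) + n * log10(2)) + 1 decimal digits. A quick check in Python:

```python
import math

# Decimal digits of k * 2^n + 1, here k = 99739, n = 14019102
k, n = 99739, 14019102
digits = math.floor(math.log10(k) + n * math.log10(2)) + 1
print(digits)  # 4220176
```

This agrees with the 4,220,176 digits stated in the announcement.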

Thanks for supporting SixTrack at LHC@Home and updates

LHC@home - Tue, 23.01.2018 - 19:08
Dear volunteers,

All members of the SixTrack team would like to thank each of you for supporting our project at LHC@Home. The last few weeks saw a significant increase in workload, and your constant help did not pause even during the Christmas holidays, which is something we really appreciate!

As you know, we are interested in simulating the dynamics of the beam in ultra-relativistic storage rings like the LHC. As in other fields of physics, the dynamics is complex and can be decomposed into a linear and a non-linear part. The former puts the expected performance of the machine within reach, whereas the latter might dramatically affect the stability of the circulating beam. While the linear part can be analysed with the computing power of a laptop, the non-linear part requires BOINC, and hence you! In fact, we perform very large scans of parameter spaces to see how non-linearities affect the motion of beam particles in different regions of the beam phase space and for different values of key machine parameters. Our main observable is the dynamic aperture (DA), i.e. the boundary between stable (bounded) and unstable (unbounded) particle motion.
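The idea of a dynamic-aperture scan can be sketched with a toy model. The 2D Hénon map (a linear rotation plus a single sextupole-like kick) is a classic stand-in for one turn around a ring: tracking a range of initial amplitudes for many turns and recording which ones stay bounded gives a crude DA estimate. This is only an illustrative sketch; the tune, turn count and loss threshold below are arbitrary assumptions, not SixTrack parameters.

```python
import math

def survives(x0, tune=0.27, turns=1000, limit=10.0):
    """Track one particle through the 2D Henon map (linear rotation
    plus an x^2 sextupole-like kick); True if it stays bounded."""
    w = 2 * math.pi * tune
    c, s = math.cos(w), math.sin(w)
    x, p = x0, 0.0
    for _ in range(turns):
        kicked = p + x * x                       # non-linear kick
        x, p = c * x + s * kicked, -s * x + c * kicked
        if abs(x) > limit or abs(p) > limit:
            return False                         # particle lost
    return True

def dynamic_aperture(step=0.01, xmax=2.0):
    """Largest initial amplitude (scanned in steps) that survives."""
    da, x0 = 0.0, step
    while x0 <= xmax:
        if not survives(x0):
            break
        da = x0
        x0 += step
    return da
```

In a real study the same scan is repeated over many machine configurations and error seeds, which is why the parameter space becomes large enough to need volunteer computing.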

The studies mainly target the LHC and its luminosity upgrade, the so-called HL-LHC. Thanks to this new accelerator, by ~2035 the LHC will be able to deliver ten times more data to the experiments than is foreseen in the first 10-15 years of LHC operation, in a comparable time. We are in full swing designing the upgraded machine, and the present operation of the LHC is a unique occasion to benchmark our models and simulation results. Deep knowledge of the DA of the LHC is essential to properly tune the working point of the HL-LHC.

If you have crunched simulations named "workspace1_hl13_collision_scan_*" (Frederik), then you have helped us map the effects on dynamic aperture of the unavoidable magnetic errors expected from the new HL-LHC hardware, and identify the best working point of the machine and correction strategies. Tasks named like "w2_hllhc10_sqz700_Qinj_chr20_w2*" (Yuri) focus on the magnets responsible for squeezing the beams before colliding them; due to their prominent role, these few magnets have such a big impact on the non-linear dynamics that the knobs controlling the linear part of the machine can offer relevant remedial strategies.

Many recent tasks are aimed at relating the beam lifetime to the dynamic aperture. The beam lifetime is a measured quantity that tells us how long the beams are going to stay in the machine, based on the current rate of losses. A theoretical model relating beam lifetime and dynamic aperture was developed, and a large simulation campaign has started to benchmark the model against the many measurements taken with the LHC over the past three years. One set of studies, named "w16_ats2017_b2_qp_0_ats2017_b2_QP_0_IOCT_0" (Pascal), considers the unavoidable multipolar errors of the magnets as the main source of non-linearities, whereas tasks named "LHC_2015*" (Javier) take into account the parasitic encounters near the collision points, i.e. the so-called "long-range beam-beam effects".

One of our users (Ewen) is carrying out two studies thanks to your help. In 2017, DA was directly measured for the first time in the LHC at top energy, with nonlinear magnets on either side of the ATLAS and CMS experiments used to vary the DA; he wants to see how well the simulated DA compares to these measurements. The second study looks systematically at how the time dependence of DA in simulation depends on the strength of linear transverse coupling and on the way it is generated in the machine. In fact, some previous simulations and measurements at injection energy have indicated that linear coupling between the horizontal and vertical planes can have a large impact on how the dynamic aperture evolves over time.

In all this, your help is fundamental: by running the tasks we submit to BOINC, you let us carry out the simulations and studies we are interested in. Hence, the warmest "thank you" to all of you!
Happy crunching to everyone, and stay tuned!

Alessio and Massimo, for the LHC SixTrack team.

LHC@home down-time due to system updates

LHC@home - Tue, 23.01.2018 - 11:19
Tomorrow, Wednesday 24/1, the LHC@home servers will be unavailable for a short period while our storage backend is taken down for a system update.

Today, Tuesday 23/1, some of the Condor servers that handle CMS, LHCb and Theory tasks will be down for a while. Regarding the ongoing issues with uploads of files, please refer to this thread.

Thanks for your understanding and happy crunching!