Join the MicroCenterOfficial Folding@Home Team: #257944 - Page 12 — Micro Center

Join the MicroCenterOfficial Folding@Home Team: #257944


Comments

  • kmiller922 Glenside, Pa ✭✭✭
    @Wommwalsh congrats on 100 Million and going 
  • MightyMayfield Mayfield Heights, OH ✭✭✭
    edited June 2020
    Well, alright then. If you go on vacation, don't forget to ask someone to monitor things. Noted.
  • cine_chris Powder Springs, GA ✭✭✭
    edited June 2020
    @MightyMayfield
    MCO should cross the 10 Billion point mark today!
    The MCO 24avg ranking is 31.
    As a team, this is how we compare (chart: green is active users):
    Active retention of 478/1,203, or 39.7%, is very good.

  • cine_chris Powder Springs, GA ✭✭✭
    edited June 2020
    Based on Folding@Home suggestions, my 'summer' folder is a dual 2060 Super. I often see this system >3M PPD with a 75% power cap, and total system power draw is a very modest 370 watts, GPUs in the low 60Cs. It's an older Q87 mobo with an i7-4770k CPU, and the skeleton chassis was scavenged from an old tower system, so it's mostly repurposed gear. Besides the GPUs, the only new stuff is case fans and a heatsink, and I really didn't need the heatsink since I don't do CPU folding (it isn't power efficient). PPD production is close to a 2080 Ti, and if you can get open-box GPUs, it's about half the cost!
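The PPD-per-watt argument above can be put in numbers. A minimal sketch, using only the figures from this post (~3M PPD at ~370 W wall draw); the function name is illustrative and the result is an owner-reported estimate, not a benchmark:

```python
def ppd_per_watt(ppd: float, watts: float) -> float:
    """Points-per-day earned for each watt of wall power."""
    return ppd / watts

# Figures from the post above: ~3M PPD at ~370 W total system draw.
dual_2060_super = ppd_per_watt(3_000_000, 370)
print(f"dual 2060 Super rig: ~{dual_2060_super:,.0f} PPD/W")  # ~8,108 PPD/W
```

A higher PPD/W figure means more science per dollar on the power bill, which is the whole point of the 75% power cap mentioned above.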
  • RedJr55RedJr55 NEOH
    For those who may be interested, there is currently an online auction (e***, not sure if live links are acceptable here or not) for a small quantity of Watts Up? .Net power meters at a reasonable cost. Without some type of instrumentation, it's all guesswork.
  • kmiller922 Glenside, Pa ✭✭✭
    We have now passed the 10 Billion point mark, with 18 of us past the 100 Million mark and 2 more close to it. Keep up the great work, everyone. We can be sure this isn't just going to go away like we would all like.
  • Kirito ✭✭
    Hey folks, having a problem here.
    I am running a Lenovo ThinkStation C30: two Xeon processors, 64 GB of RAM, and no GPU for folding. The OS is Fedora Linux running a KDE desktop. I have 6.5 TB of hard drive storage; the OS runs on a 256 GB M.2 SATA drive on the motherboard.

    The problem: the FAH a7 client cycles about once every minute or two, runs at 100%, and then stops after twenty seconds or less. I did all my updates and rebooted the machine, but the problem persists. Moreover, this machine had been working without an issue for three weeks (I only finished building it three weeks ago). On top of all this, FAHControl won't run except through a browser. I am running the 14524 COVID-19 project.

    I am running two other Linux machines with similar, less powerful setups and a Win10 machine.
    Ran out of ideas.  Any help is appreciated. 

    Kirito

  • kmiller922 Glenside, Pa ✭✭✭
    Kirito said:
    ...FAH a7 Client cycles about once every minute or two, runs at 100% and then stops after twenty seconds or less...

    I believe there was a post on the FAH forum that will address the issue, if I remember correctly.
  • cine_chris Powder Springs, GA ✭✭✭
    Kirito said:
    ...OS is Fedora Linux running a KDE desktop...
    FAH a7 Client cycles about once every minute or two, runs at 100% and then stops after twenty seconds or less.  Did all my updates, rebooted the machine, and the problem persists.  Moreover, this machine has been working without an issue for three weeks.  I only finished building three weeks ago.  And on top of all this the FAH Control wont run except through a browser...
    What version of Fedora are you running? There seems to be a lot online about Fedora 31 and 32 shipping Python 3 instead of Python 2 and needing to change a header file (FAHControl is still a Python 2 application).
  • Kirito ✭✭
    Thank you for helping me get FAHControl operating.
    Now I have run into this. What am I doing wrong?

    Kirito


    15:59:43:WU00:FS00:Starting
    15:59:43:WU00:FS00:Removing old file 'work/00/logfile_01-20200612-231159.txt'
    15:59:43:WU00:FS00:Running FahCore: /usr/bin/FAHCoreWrapper /var/lib/fahclient/cores/cores.foldingathome.org/v7/lin/64bit/avx/Core_a7.fah/FahCore_a7 -dir 00 -suffix 01 -version 706 -lifeline 1528 -checkpoint 15 -np 24
    15:59:43:WU00:FS00:Started FahCore on PID 8562
    15:59:43:WU00:FS00:Core PID:8566
    15:59:43:WU00:FS00:FahCore 0xa7 started
    15:59:44:WU00:FS00:0xa7:*********************** Log Started 2020-06-14T15:59:43Z ***********************
    15:59:44:WU00:FS00:0xa7:************************** Gromacs Folding@home Core ***************************
    15:59:44:WU00:FS00:0xa7:       Type: 0xa7
    15:59:44:WU00:FS00:0xa7:       Core: Gromacs
    15:59:44:WU00:FS00:0xa7:       Args: -dir 00 -suffix 01 -version 706 -lifeline 8562 -checkpoint 15 -np
    15:59:44:WU00:FS00:0xa7:             24
    15:59:44:WU00:FS00:0xa7:************************************ CBang *************************************
    15:59:44:WU00:FS00:0xa7:       Date: Nov 5 2019
    15:59:44:WU00:FS00:0xa7:       Time: 06:06:57
    15:59:44:WU00:FS00:0xa7:   Revision: 46c96f1aa8419571d83f3e63f9c99a0d602f6da9
    15:59:44:WU00:FS00:0xa7:     Branch: master
    15:59:44:WU00:FS00:0xa7:   Compiler: GNU 8.3.0
    15:59:44:WU00:FS00:0xa7:    Options: -std=c++11 -O3 -funroll-loops -fno-pie -fPIC
    15:59:44:WU00:FS00:0xa7:   Platform: linux2 4.19.0-5-amd64
    15:59:44:WU00:FS00:0xa7:       Bits: 64
    15:59:44:WU00:FS00:0xa7:       Mode: Release
    15:59:44:WU00:FS00:0xa7:************************************ System ************************************
    15:59:44:WU00:FS00:0xa7:        CPU: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz
    15:59:44:WU00:FS00:0xa7:     CPU ID: GenuineIntel Family 6 Model 62 Stepping 4
    15:59:44:WU00:FS00:0xa7:       CPUs: 24
    15:59:44:WU00:FS00:0xa7:     Memory: 62.72GiB
    15:59:44:WU00:FS00:0xa7:Free Memory: 58.75GiB
    15:59:44:WU00:FS00:0xa7:    Threads: POSIX_THREADS
    15:59:44:WU00:FS00:0xa7: OS Version: 5.6
    15:59:44:WU00:FS00:0xa7:Has Battery: false
    15:59:44:WU00:FS00:0xa7: On Battery: false
    15:59:44:WU00:FS00:0xa7: UTC Offset: -4
    15:59:44:WU00:FS00:0xa7:        PID: 8566
    15:59:44:WU00:FS00:0xa7:        CWD: /var/lib/fahclient/work
    15:59:44:WU00:FS00:0xa7:******************************** Build - libFAH ********************************
    15:59:44:WU00:FS00:0xa7:    Version: 0.0.18
    15:59:44:WU00:FS00:0xa7:     Author: Joseph Coffland <[email protected]>
    15:59:44:WU00:FS00:0xa7:  Copyright: 2019 foldingathome.org
    15:59:44:WU00:FS00:0xa7:   Homepage: https://foldingathome.org/
    15:59:44:WU00:FS00:0xa7:       Date: Nov 5 2019
    15:59:44:WU00:FS00:0xa7:       Time: 06:13:26
    15:59:44:WU00:FS00:0xa7:   Revision: 490c9aa2957b725af319379424d5c5cb36efb656
    15:59:44:WU00:FS00:0xa7:     Branch: master
    15:59:44:WU00:FS00:0xa7:   Compiler: GNU 8.3.0
    15:59:44:WU00:FS00:0xa7:    Options: -std=c++11 -O3 -funroll-loops -fno-pie
    15:59:44:WU00:FS00:0xa7:   Platform: linux2 4.19.0-5-amd64
    15:59:44:WU00:FS00:0xa7:       Bits: 64
    15:59:44:WU00:FS00:0xa7:       Mode: Release
    15:59:44:WU00:FS00:0xa7:************************************ Build *************************************
    15:59:44:WU00:FS00:0xa7:       SIMD: avx_256
    15:59:44:WU00:FS00:0xa7:********************************************************************************
    15:59:44:WU00:FS00:0xa7:Project: 14524 (Run 333, Clone 5, Gen 22)
    15:59:44:WU00:FS00:0xa7:Unit: 0x0000002580fccb0a5e781c0de4bfe2da
    15:59:44:WU00:FS00:0xa7:Reading tar file core.xml
    15:59:44:WU00:FS00:0xa7:Reading tar file frame22.tpr
    15:59:44:WU00:FS00:0xa7:Digital signatures verified
    15:59:44:WU00:FS00:0xa7:Calling: mdrun -s frame22.tpr -o frame22.trr -x frame22.xtc -cpt 15 -nt 24
    15:59:44:WU00:FS00:0xa7:Steps: first=5500000 total=250000
    15:59:44:WU00:FS00:0xa7:ERROR:
    15:59:44:WU00:FS00:0xa7:ERROR:-------------------------------------------------------
    15:59:44:WU00:FS00:0xa7:ERROR:Program GROMACS, VERSION 5.0.4-20191026-456f0d636-unknown
    15:59:44:WU00:FS00:0xa7:ERROR:Source code file: /host/debian-stable-64bit-core-a7-avx-release/gromacs-core/build/gromacs/src/gromacs/mdlib/domdec.c, line: 6902
    15:59:44:WU00:FS00:0xa7:ERROR:
    15:59:44:WU00:FS00:0xa7:ERROR:Fatal error:
    15:59:44:WU00:FS00:0xa7:ERROR:There is no domain decomposition for 20 ranks that is compatible with the given box and a minimum cell size of 1.4227 nm
    15:59:44:WU00:FS00:0xa7:ERROR:Change the number of ranks or mdrun option -rcon or -dds or your LINCS settings
    15:59:44:WU00:FS00:0xa7:ERROR:Look in the log file for details on the domain decomposition
    15:59:44:WU00:FS00:0xa7:ERROR:For more information and tips for troubleshooting, please check the GROMACS
    15:59:44:WU00:FS00:0xa7:ERROR:website at http://www.gromacs.org/Documentation/Errors
    15:59:44:WU00:FS00:0xa7:ERROR:-------------------------------------------------------
    15:59:49:WU00:FS00:0xa7:WARNING:Unexpected exit() call
    15:59:49:WU00:FS00:0xa7:WARNING:Unexpected exit from science code
    15:59:49:WU00:FS00:0xa7:Saving result file ../logfile_01.txt
    15:59:49:WU00:FS00:0xa7:Saving result file md.log
    15:59:49:WU00:FS00:0xa7:Saving result file science.log
    15:59:49:WU00:FS00:FahCore returned: INTERRUPTED (102 = 0x66)
    16:00:34:35:127.0.0.1:New Web session
    16:00:43:WU00:FS00:Starting
    16:00:43:WU00:FS00:Removing old file 'work/00/logfile_01-20200612-231259.txt'
    16:00:43:WU00:FS00:Running FahCore: /usr/bin/FAHCoreWrapper /var/lib/fahclient/cores/cores.foldingathome.org/v7/lin/64bit/avx/Core_a7.fah/FahCore_a7 -dir 00 -suffix 01 -version 706 -lifeline 1528 -checkpoint 15 -np 24
    16:00:43:WU00:FS00:Started FahCore on PID 8920
    16:00:43:WU00:FS00:Core PID:8924
    16:00:43:WU00:FS00:FahCore 0xa7 started
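For what it's worth, the "no domain decomposition for 20 ranks" error in that log has a mechanical explanation: with -np 24, GROMACS apparently reserves some threads for PME, leaving 20 particle-particle ranks, and it then has to split the simulation box into a 20-cell grid where every cell edge is at least 1.4227 nm. A rough sketch of that feasibility check (the box dimensions below are hypothetical, not taken from this log):

```python
from itertools import product

def grids(n):
    """All (nx, ny, nz) grids whose cell count is exactly n."""
    return [(nx, ny, nz)
            for nx, ny, nz in product(range(1, n + 1), repeat=3)
            if nx * ny * nz == n]

def feasible(n, box, min_cell=1.4227):
    """Grids of n cells where every cell edge stays >= min_cell (nm)."""
    bx, by, bz = box
    return [(nx, ny, nz) for nx, ny, nz in grids(n)
            if bx / nx >= min_cell and by / ny >= min_cell and bz / nz >= min_cell]

box = (7.0, 7.0, 7.0)          # hypothetical box, NOT from the log
print(feasible(20, box))       # [] -> no valid split, i.e. the fatal error above
print(len(feasible(12, box)))  # > 0 -> why fewer threads can work
```

The fix reported further down the thread (dropping to 12 CPUs) matches this picture: 12 factors into small grids like 2x2x3, while every factorization of 20 forces a factor of 5 or more along one axis.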

  • Kirito ✭✭
    The computer is running Fedora 32. It cycles this log every minute. FAHControl wouldn't appear on screen before, and I couldn't get log data; the log data above came thanks to the link you gave me, Chris. Still haven't solved the cycling problem. Is it a problem with their GROMACS software? How do I flush the job and start something new, to see if it's just the job that is causing the problem?

    Kirito
  • kmiller922 Glenside, Pa ✭✭✭
    edited June 2020
    I would suggest posting that log on the FAH forum; the guys there can help you a ton with it. It does look like a Linux setting might be the issue, but I'm not 100% sure. I don't think a WU can be flushed out, but it will time out and fail. You might also be able to check your GROMACS settings against the URL in the log: http://www.gromacs.org/Documentation/Errors
  • Kirito ✭✭
    Thanks kmiller, I will put the data up on the FAH forum. I went through the GROMACS error page when I found it in the log, but nothing seemed to fit the problem (of course, that page hasn't been updated since 2016). Let's see what FAH has to say.

    Thanks!

    Kirito
  • Kirito ✭✭
    Found a solution! Apparently this particular project has issues running on 24 cores. I had to switch it down to 12 cores at 3/4 speed, and now it has been running all night. Thanks for everyone's help!

    Kirito
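For anyone who hits the same thing: in the v7 client, the thread count for a CPU slot can be capped in config.xml (or via FAHControl's slot options). A sketch of the relevant fragment, assuming slot 0 is the CPU slot on your machine:

```xml
<config>
  <!-- Cap the CPU folding slot at 12 threads instead of all 24 -->
  <slot id='0' type='CPU'>
    <cpus v='12'/>
  </slot>
</config>
```

Restart FAHClient after editing for the change to take effect.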
  • kmiller922 Glenside, Pa ✭✭✭
    edited June 2020
    @MightyMayfield nicely done cracking the 400 mark.
  • MightyMayfield Mayfield Heights, OH ✭✭✭
    edited June 2020
    @MightyMayfield nicely done cracking the 400 mark.
    Thank you! MC45 is also about to cross into 1000 territory! I'm grateful for their and everyone's continued contributions. I never imagined a simple request of mine might lead to an entire army of machines sciencing their way toward a better future. I do wish hardware were a bit easier to get these days, though. Large power supplies are slim pickings for folding rigs.
  • kmiller922 Glenside, Pa ✭✭✭
    edited June 2020
    We really need a way to keep people motivated in this. I'm just disappointed by the amount of support from my local store. The reason I say this is that my laptop and one other GPU, a 980 Ti, are running around the clock, and my CPU and 2080 Ti are used about 10-14 hours a day by my son, since he's still home from work. You would think that even if the store ran just one dedicated system on a table with a sign, during hours of operation, they would be able to do more than 35k points in a day. Just something I've taken notice of.
  • cine_chris Powder Springs, GA ✭✭✭
    edited June 2020
    We really need a way to keep people motivated in this.
    The summer months will be difficult, as most people will pay more than double to run their gear because of the cost of extracting the heat it generates. No HVAC system is anywhere near 100% efficient, so summer PPD can strain budgets and emotional commitment to a cause. That's one of the reasons I posted above about trimming watts and selecting gear with higher PPD/watt.
    So one approach would be to encourage people to return in the cooler months, or to fold on cooler days or in evening hours when power might be cheaper.
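The summer premium above can be roughed out: every watt a rig dissipates indoors has to be pumped back out by the air conditioner, which adds roughly 1/COP extra watts, where COP is the AC's coefficient of performance. A back-of-envelope sketch, with the COP values and the 370 W rig figure as assumptions drawn from earlier posts:

```python
def summer_multiplier(cop: float) -> float:
    """Wall watts per folding watt once AC removes the heat (1 + 1/COP)."""
    return 1 + 1 / cop

rig_watts = 370  # the dual-2060 rig from the earlier post
for cop in (2.5, 3.5):  # assumed COPs for an older vs. newer AC unit
    total = rig_watts * summer_multiplier(cop)
    print(f"COP {cop}: {rig_watts} W of folding ~ {total:.0f} W at the meter")
```

This only covers the heat-removal overhead; tiered or seasonal electricity rates can push the real summer cost higher still.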
  • cine_chris Powder Springs, GA ✭✭✭
    @kmiller922 The expectation of a 24hr/365 commitment is daunting to many people, and the cost of participation a burden. Some ideas to ease the folding commitment and encourage participation: 1) Fold-athon Weekend: once per month, the team maxes out their effort for a weekend, challenging people to tag on Friday and Monday. A good way to invite others to participate and recruit for the team. 2) Sprints: similar in concept to the Fold-athon. A day, a week; self-challenge, or challenge friends and other teams. 3) Kiosks at local MC stores. My local store doesn't have anything that I'm aware of... 4) Fall recruiting strategy: cooler temps, and more powerful gear with higher efficiency will be arriving.
  • teamtempest
    I don't know if any kind of contest would motivate me to do anything more than I'm already doing. Which is not much: after two months, my point total is a little over two million and my work units completed a little over 400. My off-the-shelf machine is simply not that powerful. I'm basically just donating spare cycles when I'm not seriously using it (running the folder at full power drags down anything involving rapid screen updates). What I find more motivating is seeing actual results coming out of all these contributions people are making. How many work units does it take to complete an average project? 'Cause I have a feeling that I'm seeing the same project numbers I saw at the start. Hundreds of millions of work units completed and apparently not done yet? That's discouraging.

  • Kirito ✭✭
    @kmiller922 The expectation of a 24hr/365 commitment is daunting to many people and the cost of participation a burden.
    I know, living in New England, that winter will be harder for me, as I will have to pay for gas and electricity. I don't need the heat running in the summer, and I don't run air conditioning, just fans, though my computer room is 5 degrees warmer than the rest of the house. That said, helping people get GPUs would go a long way toward helping.
    Kirito

  • MightyMayfield Mayfield Heights, OH ✭✭✭
    teamtempest said:
    ...How many work units does it take to complete an average project? 'Cause I have a feeling that I'm seeing the same project numbers that I saw at the start...
    Hi teamtempest, the number of WUs in a project can vary greatly depending on the type of research. I would advise looking at the scientific papers published by the FAH team if you would like to see what comes of all the computational power! Some of them are quite interesting reads. They can take months to be peer reviewed, though, so many of the COVID papers are still quite some time out. Those can be seen on Stanford's page before public release.
  • MightyMayfield Mayfield Heights, OH ✭✭✭
    edited June 2020

    I time traveled and double posted don't mind me.
  • kmiller922 Glenside, Pa ✭✭✭
    @cine_chris just to point out one thing: I was referring to my local MC store, not everyone else's.
  • kmiller922 Glenside, Pa ✭✭✭

    I time traveled and double posted don't mind me.
    stop going around the sun at max warp in the Enterprise lol
  • MightyMayfield Mayfield Heights, OH ✭✭✭
    edited June 2020
    Wow, triple post and a day later. Ouch.
  • Kirito ✭✭

    I time traveled and double posted don't mind me.
    stop going around the sun at max warp in the Enterprise lol

    I suspect it has to do more with parallel processing on an overclocked brain.
    You are seeing the results of data mirroring (or data bleeding) across processors.
    We know Mayfield has been running super hot lately to put the numbers up.
    He needs to go take a swim in a cooling pond, followed by a couple of Coronas.
    (The beer, not the virus.)
    Kirito
  • cine_chris Powder Springs, GA ✭✭✭
    @kmiller922
    Of the Top 50 folding teams, Micro Center was 7th for member retention. I think it might be interesting to see how other teams retain folks.
  • Awesome!  Good job cine_chris B)
