Large increase in Hash rate variability / p2pool hash
-
I noticed that on the P2pool cloud.
There is one new US pool that threw in up to 270 MHash but is down to zero now.
In P2pool this can happen if two different p2pool clouds connect together, but I have never seen it when just one pool was new to the cloud. The hashrate increased around 8 AM CET, had its maximum between 9 AM and 12 AM and is decreasing now. The ramp down shown is a bit misleading, as the pool which had up to 270 MHash is down to 0 MHash now. It simply takes some time for the pool stats to react and calculate the correct pool hashrate.
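For anyone wondering why the displayed rate lags: pool stats of this kind usually estimate the hash rate from the shares seen over a recent look-back window, so after a miner drops out the number only decays over that window. A minimal sketch of the idea (illustrative only, not p2pool’s actual code, which is Python):

// Illustrative sketch: estimate a pool's hash rate from recent shares.
// Each share of difficulty d represents ~d * 2^32 expected hashes, so the
// estimate can only fall back gradually once a big miner disappears.
#include <cstdint>
#include <vector>

struct Share {
    int64_t timestamp;   // unix time the share arrived
    double  difficulty;  // share difficulty
};

// Estimated hash rate (hashes/second) over the last `window` seconds.
double EstimatePoolHashRate(const std::vector<Share>& shares,
                            int64_t now, int64_t window)
{
    double work = 0.0;
    for (const Share& s : shares)
        if (s.timestamp > now - window)
            work += s.difficulty * 4294967296.0;  // 2^32
    return work / static_cast<double>(window);
}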
-
Just seen this fix come through for PPCoin:
flying delorean transaction exploit fix (zero cost mempool memory exhaustion exploit)
You can view, comment on, or merge this pull request online at: https://github.com/ppcoin/ppcoin/pull/104

Commit Summary
Update main.cpp

File Changes
M src/main.cpp (3)

Patch Links:
https://github.com/ppcoin/ppcoin/pull/104.patch
https://github.com/ppcoin/ppcoin/pull/104.diff

From cbaaba33182689fa988b722de3ed41823e754b92 Mon Sep 17 00:00:00 2001
From: John Connor john-connor@users.noreply.github.com
Date: Tue, 8 Dec 2015 08:40:56 -0500
Subject: [PATCH] Update main.cpp

flying delorean transaction exploit fix
---
 src/main.cpp | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/src/main.cpp b/src/main.cpp
index 1050bce..2466cfe 100644
--- a/src/main.cpp
+++ b/src/main.cpp
@@ -469,6 +469,9 @@ bool CTransaction::CheckTransaction() const
         return DoS(10, error("CTransaction::CheckTransaction() : vin empty"));
     if (vout.empty())
         return DoS(10, error("CTransaction::CheckTransaction() : vout empty"));
+    // Time (prevent mempool memory exhaustion attack)
+    if (nTime > GetAdjustedTime() + nMaxClockDrift)
+        return DoS(10, error("CTransaction::CheckTransaction() : timestamp is too far into the future"));
     // Size limits
     if (::GetSerializeSize(*this, SER_NETWORK, PROTOCOL_VERSION) > MAX_BLOCK_SIZE)
         return DoS(100, error("CTransaction::CheckTransaction() : size limits failed"));
-
We have had something like this for a long time already:
from feathercoin main.cpp:
// Check timestamp
if (block.GetBlockTime() > GetAdjustedTime() + 2 * 60 * 60)
    return state.Invalid(error("CheckBlockHeader() : block timestamp too far in the future"),
                         REJECT_INVALID, "time-too-new");
So this can’t be the reason for the current hash rate variations.
What we could change is the allowed time window, which is currently 2 times the block rate, or 2 minutes, but we need to give some room for clock drift and can’t go below 1 minute anyway.
Nevertheless, 2 minutes is quite a large window, as I assume that all mining systems use NTP and the clock drift should be measured in milliseconds rather than seconds.
On the other hand, network delay only delays the propagation of a new block, and the only danger we would have is that the timestamp could move into the past, which is normal.
Maybe @ghostlander or @lizhi can give their opinion.
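To make it concrete, the whole discussion is about one number in a check of this shape (a sketch only; the constant name is mine, and the value just mirrors the 2 * 60 * 60 from the code quoted above):

// Sketch only: the quoted rule with the allowed future window pulled out
// into a named constant (the name is not from the Feathercoin source).
#include <cstdint>

static const int64_t MAX_FUTURE_BLOCK_TIME = 2 * 60 * 60;

bool IsBlockTimestampAcceptable(int64_t nBlockTime, int64_t nAdjustedTime)
{
    // Tightening the rule means lowering MAX_FUTURE_BLOCK_TIME, but it has to
    // stay comfortably above realistic clock drift between honest NTP-synced nodes.
    return nBlockTime <= nAdjustedTime + MAX_FUTURE_BLOCK_TIME;
}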
-
Cheers for giving it a look. I thought so; I looked at that when we did eHRC.
I noticed there have been a few questions about how to mine neoscrypt / x11 on BitcoinTalk. I just copied the sgminer Linux guide, as they were being directed to x11. They may have ended up doing algo switching…
-
P2pool hashrate is back to normal.
Maybe it’s not related, but my feathercoind on the pool server was hanging just now (~5:50 PM CET).
If more coins move to Neoscrypt, it is good for the algorithm. It becomes more popular.
-
There’s currently a major DDoS on the UK universities’ Janet network, so there might be some repercussions from that sort of thing.
It looked like some “farms” or private miners, as opposed to coins…
-
It’s definitely a p2pool node in Canada, which comes in with incredibly high hash rates, but only for a short time, a few hours.
Then its hash rate goes down to 0.
Nevertheless the pool’s share difficulty is driven high by this node, and it takes time to get back down to normal share difficulties.
To be clear, I am talking about pool share difficulties only. The block chain difficulty is calculated from the total hash rate, including solo miners and non-p2pool pools, so it is not influenced as strongly.
-
I agree, seems to be p2pool switching starting up.
I can see that the advantage of the shorter share period of p2pool, which I was critical of, will now come into play. Looks like they need to push it back up once a day? As it had rolled down past 0.0036 from 0.0052 last night, the total hash has increased back up.
-
It’s relatively easy to build a coin switching pool with p2pool as long as you switch between Neoscrypt coins.
But I assume that somebody builds a small private p2pool cloud, mines at low share difficulty, and then connects to the public pool cloud. Then the shares mined in private are added to the pool’s hash rate and paid like any other shares submitted.
I also noticed that the pool in question is rebooted every day or even more often, which fits my theory.
-
Just noticed that the block chain hash rate also fluctuates a lot.
BC difficulty at 2015-12-12 07:09:32 or block 999936 was 1.668
BC difficulty at 2015-12-12 07:55:56 or block 999994 is 5.125. So we definitely have some switching pools here. :disappointed:
Also the number of transactions is increasing, which is a good sign.
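As a rough cross-check (assuming the 1-minute block target mentioned earlier and the standard 2^32 expected hashes at difficulty 1), those two readings imply roughly a threefold jump in network hash rate in under an hour:

// Back-of-the-envelope estimate; the 60 s target spacing and the
// difficulty-1 work of 2^32 hashes are assumptions, not measured values.
#include <cstdio>

int main()
{
    const double hashesPerDiff1  = 4294967296.0;  // 2^32
    const double targetBlockTime = 60.0;          // assumed target spacing in seconds

    const double diffBefore = 1.668;              // block 999936
    const double diffAfter  = 5.125;              // block 999994

    const double hrBefore = diffBefore * hashesPerDiff1 / targetBlockTime;  // ~119 MH/s
    const double hrAfter  = diffAfter  * hashesPerDiff1 / targetBlockTime;  // ~367 MH/s

    std::printf("implied hash rate: %.0f MH/s -> %.0f MH/s (x%.1f)\n",
                hrBefore / 1e6, hrAfter / 1e6, hrAfter / hrBefore);
    return 0;
}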
-
Our problem might be success … i.e. FTC has become a “most profitable” coin …
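That also fits how profit-switching services pick their target: they simply compare expected revenue per hash across coins and jump to whichever leads. A minimal sketch of that comparison, using placeholder fields rather than real market data:

// Illustrative only: expected revenue per hash scales with
// block reward * price / (difficulty * 2^32), so a coin whose price rises
// or whose difficulty lags briefly becomes the "most profitable" target.
#include <string>
#include <vector>

struct Coin {
    std::string name;
    double blockReward;   // coins per block
    double difficulty;    // current network difficulty
    double price;         // exchange price in a common unit (e.g. BTC)
};

double RevenuePerHash(const Coin& c)
{
    const double hashesPerDiff1 = 4294967296.0;  // 2^32 expected hashes at difficulty 1
    return c.blockReward * c.price / (c.difficulty * hashesPerDiff1);
}

// Pick the currently most profitable coin (assumes all use the same algorithm).
const Coin* MostProfitable(const std::vector<Coin>& coins)
{
    const Coin* best = nullptr;
    for (const Coin& c : coins)
        if (!best || RevenuePerHash(c) > RevenuePerHash(*best))
            best = &c;
    return best;
}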
-
@Wellenreiter said:
We have had something like this for a long time already:
from feathercoin main.cpp:
// Check timestamp
if (block.GetBlockTime() > GetAdjustedTime() + 2 * 60 * 60)
    return state.Invalid(error("CheckBlockHeader() : block timestamp too far in the future"),
                         REJECT_INVALID, "time-too-new");
So this can’t be the reason for the current hash rate variations.
What we could change is the allowed time window, which is currently 2 times the block rate, or 2 minutes, but we need to give some room for clock drift and can’t go below 1 minute anyway.
Nevertheless, 2 minutes is quite a large window, as I assume that all mining systems use NTP and the clock drift should be measured in milliseconds rather than seconds.
On the other hand, network delay only delays the propagation of a new block, and the only danger we would have is that the timestamp could move into the past, which is normal.
Maybe @ghostlander or @lizhi can give their opinion.
I agree with you; this was discussed before and I was for tightening this. I see no reason for it to be > 1 times. If it did fail, it would fail safer and more secure.
if (block.GetBlockTime() > GetAdjustedTime() + 60 * 60)
It may be worth a couple of patches. There is also reviewing and sharpening the parameters of eHRC (reduce the short block average to 13 blocks and extend the long period; I discussed this with Bush on previous data) and removing the difficulty damping, based on the current charts and the trend of multi-algo switching. It would probably cause a HF though. :(
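For anyone not familiar with those knobs, here is a very rough sketch of what a short/long window retarget with damping looks like; the 50/50 blend, the clamp and the parameter values are assumptions for illustration, not the actual eHRC code:

// Illustrative sketch only, not Feathercoin's real retarget.
#include <algorithm>

double NextDifficultySketch(double currentDifficulty,
                            double avgBlockTimeShort,   // e.g. over the last 13 blocks, as proposed above
                            double avgBlockTimeLong,    // e.g. over a much longer window
                            double targetBlockTime,     // e.g. 60 seconds
                            double damping)             // 1.0 = no damping
{
    // Blend the short and long averages: the short window reacts quickly to
    // hash-rate spikes from switching pools, the long window keeps it stable.
    double avgBlockTime = 0.5 * (avgBlockTimeShort + avgBlockTimeLong);

    // Faster-than-target blocks -> raise difficulty, slower -> lower it.
    double adjustment = targetBlockTime / avgBlockTime;

    // Damping pulls the adjustment toward 1.0; removing it (damping = 1.0)
    // lets the difficulty follow hash-rate swings more aggressively.
    adjustment = 1.0 + (adjustment - 1.0) / damping;

    // Clamp to avoid extreme single-step swings.
    adjustment = std::max(0.5, std::min(2.0, adjustment));

    return currentDifficulty * adjustment;
}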
-
There are some signs that the increase in the global p2pool rate is genuine GPU miners connecting in China, which might explain how it got so big, so quickly. Not dismissing p2pool switching as a factor though.
-
Can’t wait to get my 20 remaining GPUs back up and running. I will probably just park them mining FTC while they offset the cost of heating for the winter.
-
An additional 20 GPUs will generate a spike in hashrate ;)
-
I’ve been monitoring the LTC p2pool for an issue on GitHub. I’ve been able to upload a couple of charts, which show some interesting developments, as the global LTC p2pool hash rate has gone down by 50% overnight …
-
Here are the results of Litecoin p2pool share difficulty monitoring.
-
The trends look quite ok for me :)
-
Unfortunately, it is all Scrypt ASICs, so they can’t switch to mining FTC. ;)
So they must have “pool” switched to get a big change in p2pool hash.