1 2017-06-30 01:08:45 0|BlueMatt|cfields: I mean mine's like 3 LOC long, but, really, I dont care....I have nothing in 15 which is in any way related to that, and only a 16 stretch goal which needs a fix there
2 2017-06-30 01:09:09 0|cfields|BlueMatt: ok. I'm actually finishing it up now
3 2017-06-30 01:09:24 0|cfields|it's really big, but very worth it imo. It'll just be a big one to review
4 2017-06-30 01:09:26 0|BlueMatt|cfields: I'm down for a touch-everything sharedptr change for 16 here
5 2017-06-30 01:09:31 0|BlueMatt|yea, I think its worth it
6 2017-06-30 01:09:38 0|BlueMatt|I mean isnt it mostly mechanical?
7 2017-06-30 01:09:57 0|cfields|nah, there's lots to it :\
8 2017-06-30 01:10:05 0|cfields|well, the shared-ptr part is pretty mechanical
9 2017-06-30 01:10:15 0|BlueMatt|hmm, ok, will reserve judgement for now, but the idea sounds good
10 2017-06-30 01:10:21 0|cfields|but then you still have the refcounting on top
11 2017-06-30 01:10:26 0|BlueMatt|and I'm totally for a big overhaul of that shit for 16
12 2017-06-30 01:10:40 0|cfields|and once you get rid of that, tons of LOCK(cs_vNodes) go away
13 2017-06-30 01:10:42 0|BlueMatt|why bother refcounting on top? just rely on ~CNode and use the shared_ptr refcounting for it?
14 2017-06-30 01:11:02 0|cfields|have to control the thread it deletes from
15 2017-06-30 01:11:19 0|BlueMatt|ah
16 2017-06-30 01:11:20 0|BlueMatt|ok
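(For context, the technique under discussion can be sketched as follows: a std::shared_ptr's last reference can be dropped from any thread, so a custom deleter queues the CNode for a designated thread instead of deleting in place. A minimal sketch with illustrative names, not cfields' actual patch:)

    // Minimal sketch, not the actual patch: a shared_ptr whose custom
    // deleter queues the CNode so a designated thread performs the delete,
    // instead of whichever thread happens to drop the last reference.
    #include <memory>
    #include <mutex>
    #include <vector>

    struct CNode { /* ... */ };

    std::mutex g_disconnect_mutex;
    std::vector<CNode*> g_nodes_to_delete; // drained by the network thread

    std::shared_ptr<CNode> MakeNodeRef(CNode* node)
    {
        return std::shared_ptr<CNode>(node, [](CNode* n) {
            // Last reference dropped (possibly on a message-handling
            // thread): don't delete here, just queue it.
            std::lock_guard<std::mutex> lock(g_disconnect_mutex);
            g_nodes_to_delete.push_back(n);
        });
    }

    // Called periodically on the designated network thread:
    void DeleteQueuedNodes()
    {
        std::vector<CNode*> to_delete;
        {
            std::lock_guard<std::mutex> lock(g_disconnect_mutex);
            to_delete.swap(g_nodes_to_delete);
        }
        for (CNode* n : to_delete) delete n; // deletion thread is now fixed
    }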
17 2017-06-30 01:11:42 0|cfields|anyway, it's nice and tidy imo. Very straightforward. Just large.
18 2017-06-30 01:11:47 0|BlueMatt|alright, well will reserve judgement, sounds reasonable in principle, then
19 2017-06-30 01:11:53 0|BlueMatt|yea, large is ok...for 16
20 2017-06-30 01:12:01 0|BlueMatt|gonna sit for 1.5 months first, then, though :/
21 2017-06-30 01:12:52 0|cfields|i don't quite understand that argument. We have a feature freeze for a reason. That's when stuff bakes. It's not a code freeze.
22 2017-06-30 01:13:52 0|BlueMatt|hmm? I mean i doubt this would land pre-15, and if it doesnt then we're all gonna be focused on getting lots of testing in on 15 and likely wont review/merge all that much before the final gets shipped
23 2017-06-30 01:14:36 0|cfields|sure, if there's no time, that's one thing. But I don't see any need for stuff to bake for a while before feature freeze just to let it bake
24 2017-06-30 01:15:22 0|BlueMatt|ohoh, you mean maybe this will land for 15? I mean id be surprised...lots to clean up and try to land before then that is probably higher prio, but, ok, fair :)
25 2017-06-30 01:15:38 0|BlueMatt|higher prio in the next 3 weeks with folks in the us on vacation for the next week, even :/
26 2017-06-30 01:16:39 0|cfields|well i'd like to do some benches before i push, but i'm thinking that cs_vNodes is blocking a significant percentage of the time
27 2017-06-30 01:17:10 0|BlueMatt|oh? alright, well may be worth pushing, then...still, two weeks to merge is an incredibly tight schedule
28 2017-06-30 01:17:17 0|cfields|and i have other stuff that relies on this (I was just putting it off until last). But yea, I agree that it's not really high priority. If it doesn't make it, it doesn't make it.
30 2017-06-30 01:17:54 0|cfields|I'm mainly just happy that I finally worked out an approach I like. It'll wear off in a few hours :)
31 2017-06-30 01:18:08 0|BlueMatt|pr quickly, then
32 2017-06-30 01:18:16 0|BlueMatt|and dont close it this time when it wears off :p
33 2017-06-30 01:18:34 0|cfields|hehe
34 2017-06-30 01:19:13 0|cfields|ok, back to coding/testing. I haven't actually run the thing yet. Time to see how badly it crashes/burns :)
35 2017-06-30 01:19:28 0|BlueMatt|lol, sg
36 2017-06-30 07:11:25 0|wumpus|whoa, validation really became a lot quicker with recent changes
37 2017-06-30 07:21:53 0|sipa|wumpus: IBD or at tip
38 2017-06-30 07:21:55 0|sipa|?
39 2017-06-30 07:22:12 0|wumpus|at tip
40 2017-06-30 07:22:21 0|wumpus|(a node catching up a few days)
41 2017-06-30 07:24:58 0|sipa|ah, i mean while in full sync processing newly mined blocks
42 2017-06-30 07:25:11 0|wumpus|it used to bring this particular system to a halt while slowly syncing blocks, now the blocks breeze by
43 2017-06-30 07:25:40 0|sipa|if you have slow I/O, pertxout likely helps a lot
44 2017-06-30 07:25:52 0|sipa|because it causes far fewer flushes
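(For readers: "pertxout" refers to the switch of the chainstate cache and database to one entry per transaction output rather than one per transaction. A rough, self-contained illustration of that data-model change, not the actual Bitcoin Core serialization code:)

    // Rough illustration of the per-txout data model (illustrative only).
    #include <array>
    #include <cstdint>
    #include <map>
    #include <tuple>
    #include <vector>

    using uint256 = std::array<uint8_t, 32>; // stand-in for Core's uint256

    struct COutPoint {
        uint256 hash; // txid
        uint32_t n;   // output index
        bool operator<(const COutPoint& o) const {
            return std::tie(hash, n) < std::tie(o.hash, o.n);
        }
    };

    struct Coin { int64_t value; std::vector<uint8_t> script; };

    // Old model (roughly): one entry per transaction, all outputs together:
    //   std::map<uint256, std::vector<Coin>> cache;
    // Per-txout model: one entry per unspent output:
    std::map<COutPoint, Coin> cache;
    // Spending one output now erases one small entry instead of rewriting
    // the record for the whole transaction, so the in-memory cache stays
    // useful longer and flushes to disk far less often.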
45 2017-06-30 07:26:10 0|wumpus|definitely slow i/o
46 2017-06-30 07:26:57 0|luke-jr|I need to kill my btrfs..
47 2017-06-30 07:28:19 0|wumpus|but that's great, a lot of users, esp windows users, tend to have slow i/o laptops
48 2017-06-30 07:28:47 0|wumpus|so that's a decent thing to optimize for
49 2017-06-30 07:29:05 0|wumpus|luke-jr: why?
50 2017-06-30 07:29:42 0|luke-jr|wumpus: slow I/O. very slow. :/
51 2017-06-30 07:30:02 0|luke-jr|even Chromium gets slow on btrfs home :/
52 2017-06-30 07:30:28 0|wumpus|which reminds me I should probably do some Windows testing again pre-0.15 *sigh*
53 2017-06-30 07:31:03 0|wumpus|luke-jr: so btrfs is better in everything except performance?
54 2017-06-30 07:31:32 0|luke-jr|wumpus: and reliability, I guess. had some kernel panics in btrfs code a few times several months ago
55 2017-06-30 07:31:48 0|luke-jr|and someone in #btrfs told me to add a mount option to avoid a very rare data corruption bug
56 2017-06-30 07:32:31 0|luke-jr|oh, and ioctls don't work in 32-bit mode, but I use 64-bit now
57 2017-06-30 07:32:35 0|wumpus|I've never tried btrfs yet, just sticking with extN on linux, I guess I'm extremely conservative with regard to file systems
58 2017-06-30 07:33:00 0|sipa|i tend to use ext4 as root fs, and zfs for all the rest
59 2017-06-30 07:33:15 0|sipa|(where "all the rest" is not much)
60 2017-06-30 07:33:43 0|luke-jr|hm, it probably doesn't help I/O that I have btrfs compression enabled
61 2017-06-30 07:34:11 0|luke-jr|although you'd think modern CPUs would be fast enough that it wouldn't matter
62 2017-06-30 07:34:29 0|wumpus|sipa: doesn't zfs on linux require custom patches?
63 2017-06-30 07:35:13 0|sipa|not on ubuntu at least
64 2017-06-30 07:35:16 0|luke-jr|O.o
65 2017-06-30 07:36:34 0|wumpus|luke-jr: for encryption that's mostly true; for compression (especially when writing) I'd expect a reasonable perf loss, while for reading you might get a perf gain in some cases (but only if you store files that are well-compressible on a slow medium)
66 2017-06-30 07:37:46 0|luke-jr|tempting to hack btrfs to notice when files get modified often and not compress those.. but I don't really want to hack on my filesystem, so not happening ☺
67 2017-06-30 07:37:49 0|wumpus|also, even if throughput is higher, latency might be worse because of the extra decompression step
68 2017-06-30 07:38:34 0|luke-jr|oh, ZFS is the filesystem that's GPL-infringing XD
69 2017-06-30 07:39:37 0|luke-jr|prob just go back to ext4
70 2017-06-30 07:41:40 0|wumpus|disk compression seems a waste of time in the 201x's, most large files (video, images, music) are already compressed so I'd dare say it helps very little in practice
71 2017-06-30 07:42:40 0|wumpus|even if you store raw video/music then generic gzip compression is not very suited
72 2017-06-30 07:43:48 0|luke-jr|maybe I should try just turning it off before giving up on it
73 2017-06-30 07:44:29 0|wumpus|thinking about it, something it might help with is compiler object files, those tend to be well-compressible
74 2017-06-30 07:44:35 0|wumpus|and large (in case of c++)
75 2017-06-30 07:46:00 0|luke-jr|(supposedly it's faster)
76 2017-06-30 07:46:44 0|sipa|wumpus: i use lzo compression for the blockchain
77 2017-06-30 07:47:05 0|sipa|lzo is much faster
78 2017-06-30 07:52:38 0|gmaxwell|wumpus: until recently the gap between disk IO speed and the rest of the system was making it so that compression (at least fast compression like LZO) actually increased performance...
79 2017-06-30 07:54:32 0|luke-jr|recently = SSD?
80 2017-06-30 07:55:47 0|luke-jr|I also made the mistake of moving my home to 5400 RPM.. >_<
81 2017-06-30 08:21:33 0|gmaxwell|not just SSD but newer pcie ssds.
82 2017-06-30 09:06:18 0|wumpus|gmaxwell: also for writing?
83 2017-06-30 09:07:35 0|wumpus|I guess it depends on the kind of data
84 2017-06-30 09:09:22 0|wumpus|for reading performance it can be quite obviously true
85 2017-06-30 09:12:28 0|wumpus|sipa: custom patch, or at the file system level?
86 2017-06-30 11:18:32 0|bitcoin-git|[13bitcoin] 15Mirobit opened pull request #10710: REST/RPC example update (06master...06docupt) 02https://github.com/bitcoin/bitcoin/pull/10710
87 2017-06-30 13:46:44 0|morcos|instagibbs: I'm trying to review #10333 and I was trying to understand the ReturnKey() behavior. How come it is not a bug (also in master) that we call ReturnKey() even in the case where CoinControl gave us a destination address (so we never reserved a new key)
88 2017-06-30 13:46:45 0|gribble|https://github.com/bitcoin/bitcoin/issues/10333 | [wallet] fee fixes: always create change, adjust value, and p… by instagibbs · Pull Request #10333 · bitcoin/bitcoin · GitHub
89 2017-06-30 13:48:37 0|instagibbs|calling return is a nop if you haven't marked a key as reserved IIRC
90 2017-06-30 13:50:58 0|instagibbs|oh i see what you mean, lemme continue reading...
91 2017-06-30 13:51:05 0|morcos|instagibbs: i just figured it out
92 2017-06-30 13:51:13 0|morcos|i didn't know how reservekey's work
93 2017-06-30 13:51:29 0|morcos|but yeah the reservekey object has nIndex -1 until you ask it to actually get a reserved key
94 2017-06-30 13:51:35 0|instagibbs|yep
95 2017-06-30 13:51:36 0|morcos|so calling return on it is a no-op
96 2017-06-30 13:51:49 0|instagibbs|so yeah i think the logic is right
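(The behaviour they converge on can be modelled in a few lines; this is an illustrative stand-in for CReserveKey, not the wallet code itself. The key point is that nIndex stays -1 until a key is actually reserved, so ReturnKey() without a prior reservation is a no-op:)

    #include <cassert>

    struct KeyPoolStub {
        int reserved = 0;
        int Reserve() { return ++reserved; }       // hand out a pool index
        void Return(int /*index*/) { --reserved; } // put it back
    };

    class ReserveKeySketch {
        KeyPoolStub& pool;
        int nIndex = -1; // -1: nothing reserved yet
    public:
        explicit ReserveKeySketch(KeyPoolStub& p) : pool(p) {}
        int GetReservedKey() {
            if (nIndex == -1) nIndex = pool.Reserve();
            return nIndex;
        }
        void ReturnKey() {
            if (nIndex != -1) pool.Return(nIndex); // only if reserved
            nIndex = -1;                           // otherwise a no-op
        }
    };

    int main() {
        KeyPoolStub pool;
        ReserveKeySketch key(pool);
        key.ReturnKey();                // nothing reserved: harmless no-op
        assert(pool.reserved == 0);
        key.GetReservedKey();
        key.ReturnKey();                // round-trip returns the key
        assert(pool.reserved == 0);
    }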
97 2017-06-30 13:52:07 0|morcos|while i have you, aren't you missing a continue at the bottom of your loop in the subtractFeeFromAmount > 0 case
98 2017-06-30 13:52:11 0|instagibbs|it's safe to call returnkey unless you're actively saving a usage
99 2017-06-30 13:52:35 0|instagibbs|checking
100 2017-06-30 13:52:36 0|morcos|instagibbs: yes agreed on the returnkey
101 2017-06-30 13:52:46 0|morcos|after line 2378
102 2017-06-30 13:54:44 0|instagibbs|wallet.cpp? That's in SelectCoins on my end
103 2017-06-30 13:55:04 0|bitcoin-git|[13bitcoin] 15jnewbery opened pull request #10711: [tests] Introduce TestNode (06master...06testnode2) 02https://github.com/bitcoin/bitcoin/pull/10711
104 2017-06-30 13:56:00 0|instagibbs|it's just going to loop, right?
105 2017-06-30 13:56:42 0|instagibbs|if subtractFeeFromAmount != 0 and you don't have enough fee, it just hits the end and loops
106 2017-06-30 13:58:16 0|morcos|but it'll sign first?
107 2017-06-30 13:59:00 0|morcos|also you seem to have a regression in the subtractFeeFromAmount > 0 case where you have isDust change. Previously that change was getting deleted and added to fees, now I think you're not doing that
108 2017-06-30 13:59:46 0|instagibbs|ah yeah i wrote this previous to that behavior, will fix
109 2017-06-30 14:00:00 0|instagibbs|not sure what you mean about signing first, I'm probably looking at the wrong line
110 2017-06-30 14:00:35 0|morcos|oh maybe i am, i was looking without whitespace, hold on
111 2017-06-30 14:03:12 0|morcos|yeah my fault on that, seems fine
112 2017-06-30 14:06:14 0|morcos|We ought to be able to avoid looping completely when subtractFeeFromAmount > 0 though...
113 2017-06-30 14:07:45 0|morcos|If we changed the logic when subtractFeeFromAmount > 0 to first check for dust change, and if that exists move it to fee, then reduce the recipient amounts as necessary to get to the fee. (unless the recipient amounts aren't big enough, which is a failure anyway)
114 2017-06-30 14:08:12 0|instagibbs|I mean it can still be considered at the end in the "Fee adjustment" stage right?
115 2017-06-30 14:09:05 0|morcos|I think it would be better to eliminate dust change to fee first, b/c then you can just subtract less from the recipients (only in the case where subtractFeeFromAmount > 0)
116 2017-06-30 14:09:40 0|instagibbs|mm right
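(A sketch of the ordering morcos proposes: dust change is swept into the fee before anything is taken from the recipients. AdjustForFee and the rounding rule are made up for illustration; the real logic would live inline in CreateTransaction:)

    #include <cstdint>
    #include <vector>

    using CAmount = int64_t;

    // Returns false if the recipients can't cover the fee
    // (a hard failure either way).
    bool AdjustForFee(std::vector<CAmount>& recipients, CAmount& change,
                      CAmount& fee_paid, CAmount fee_needed,
                      CAmount dust_threshold)
    {
        if (change > 0 && change < dust_threshold) {
            fee_paid += change; // 1) dust change goes to the fee first
            change = 0;
        }
        CAmount shortfall = fee_needed - fee_paid;
        if (shortfall <= 0) return true; // dust alone covered it
        if (recipients.empty()) return false;
        // 2) only then reduce the recipient amounts
        //    (the subtractFeeFromAmount > 0 case)
        const CAmount n = (CAmount)recipients.size();
        const CAmount share = shortfall / n;
        const CAmount remainder = shortfall % n;
        for (size_t i = 0; i < recipients.size(); ++i) {
            CAmount take = share + (i == 0 ? remainder : 0); // first output absorbs rounding
            if (recipients[i] <= take) return false; // recipient too small
            recipients[i] -= take;
            fee_paid += take;
        }
        return true;
    }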
117 2017-06-30 14:11:26 0|morcos|If you have a minute, indulge me for a second while we walk through exactly what scenario 10333 is trying to solve
118 2017-06-30 14:12:02 0|instagibbs|so it's the same situation as the previous fix you made, where it grabs a lot of inputs, fails, then next time overpays in fees. Your fix adds fees to existing change output
119 2017-06-30 14:12:20 0|instagibbs|This is to make it also cover the "no change output" case
120 2017-06-30 14:12:55 0|instagibbs|meaning we calculate what the change would look like every time, adjust as necessary
121 2017-06-30 14:13:02 0|morcos|In the no change output case though, doesn't that by definition mean we've tried via the knapsack algorithm to get inputs that add up as closely as possible to the desired target
122 2017-06-30 14:13:26 0|instagibbs|for that crazy fee level, yes
123 2017-06-30 14:13:59 0|instagibbs|nFeeRet in pathological case can be many times higher than required
124 2017-06-30 14:14:55 0|instagibbs|change is only calculated via valuein-valuetoselect, nFeeRet is part of the latter term
125 2017-06-30 14:16:46 0|morcos|Yeah maybe it's not made worse by your PR, but it seems like the algorithm that tries to find an exact match is always going to try to find one assuming a change output, and therefore will always overpay fees by the feerate*1 extra output in the case where we do find a match which doesn't require change
126 2017-06-30 14:17:19 0|morcos|but if that amount is > dust, which i think it always is, then we'll always revert to the second step of trying to find MIN_CHANGE?
127 2017-06-30 14:17:37 0|morcos|i don't know, i guess it just seems like in some ways its changing the coin selection algorithm
128 2017-06-30 14:18:04 0|morcos|maybe not for the worse, but its a weird way to change it
129 2017-06-30 14:18:20 0|instagibbs|the real fix is to do effective value selection
130 2017-06-30 14:18:49 0|morcos|Yes, that combined with not trying to find an EXACT match, but trying to find a match within some tolerance that you are willing to lose to extra fees
131 2017-06-30 14:18:58 0|morcos|EXACT match finding is stupid
132 2017-06-30 14:19:35 0|Murch|morcos: BranchAndBound assumes no change output.
133 2017-06-30 14:19:38 0|morcos|that tolerance already kind of happens via the remove-dust-output-to-fees step, but we could make it explicit and do better
134 2017-06-30 14:20:04 0|instagibbs|Yeah imo the weakness of 10333 is that it's only needed to avoid stupidity, and will likely poorly interact with real fixes
135 2017-06-30 14:20:28 0|instagibbs|right now we're just so bad at estimating, we almost never exact match
136 2017-06-30 14:20:45 0|morcos|Murch: yeah i need to review that, but I think it should be aware of the amount it's willing to discard in excess fees, and return success if it finds a result under that tolerance, and if not, assume a change output and aim for TARGET_CHANGE (probably just MIN_CHANGE)
137 2017-06-30 14:20:46 0|Murch|instagibbs: If it is only a minuscule amount missing for the fees, wouldn't it perhaps be acceptable to take that from the change output?
138 2017-06-30 14:20:56 0|instagibbs|Murch, it already does
139 2017-06-30 14:21:40 0|instagibbs|it's the case of there being no change output to make larger which is currently just SFYL
140 2017-06-30 14:21:45 0|morcos|instagibbs: what would you think about another approach, that perhaps changed the logic less
141 2017-06-30 14:21:50 0|Murch|morcos: It allows for excess up to the cost of creating a change output. In my original proposal also the cost of creating an input (spending the output later), but with the high fee rate volatility, maybe only the change is a better measure.
142 2017-06-30 14:21:54 0|morcos|i'm wary of unintended consequences...
143 2017-06-30 14:21:59 0|instagibbs|morcos, very open to it, if you have the idea
144 2017-06-30 14:22:22 0|morcos|but suppose in the no-change case, if we overpay the fee by too much, we'd just be willing to loop again (to some limit so it's not unbounded)
145 2017-06-30 14:22:51 0|Murch|morcos: BranchAndBound is only for the "no-change" case. If you're going to create a change output anyway, I don't understand why you're going to work so hard to minimize it to an arbitrary number. ;)
146 2017-06-30 14:23:12 0|morcos|Murch: ok, good that makes sense. Yes. Agreed with that too!
147 2017-06-30 14:23:17 0|instagibbs|morcos, maybe keep the caching logic, and just loop?
148 2017-06-30 14:23:26 0|instagibbs|with some anti-exhaustion param?
149 2017-06-30 14:23:28 0|Murch|I mean, you don't want to have all of your money in transit, but besides that, why is 0.01 BTC a great size for an output?
150 2017-06-30 14:23:30 0|morcos|actually the caching logic isn't somehting i love
151 2017-06-30 14:23:38 0|morcos|it just seems to add logistical complexity
153 2017-06-30 14:24:19 0|instagibbs|so any looping change will have to guarantee efficient convergence to a valid tx
154 2017-06-30 14:26:14 0|morcos|instagibbs: well in the dumbest implementation... just have a bool, triedAgain, and if nFeeRet < nFeeNeeded OR (nFeeRet > nFeeNeeded + too_much_extra AND !tried_again) then you loop again
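(Spelled out as a minimal sketch: tried_again and too_much_extra are placeholder names, and the loop body is elided:)

    #include <cstdint>

    using CAmount = int64_t;

    // Values would come from coin selection and fee estimation
    // inside CreateTransaction; this only shows the loop condition.
    void CreateTransactionSketch()
    {
        CAmount nFeeRet = 0, nFeeNeeded = 0;
        const CAmount too_much_extra = 10000; // hypothetical tolerance
        bool tried_again = false;
        while (true) {
            // ... select coins, build the tx, set nFeeRet / nFeeNeeded ...
            if (nFeeRet < nFeeNeeded) continue; // existing rule: fee too low
            if (nFeeRet > nFeeNeeded + too_much_extra && !tried_again) {
                tried_again = true; // one bounded retry when overpaying
                continue;
            }
            break; // fee acceptable, or the single retry is spent
        }
    }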
155 2017-06-30 14:26:36 0|Murch|morcos: BranchAndBound does an exhaustive search, so looping again will yield no other results. We should just do a stricter limit if we feel that we're overpaying.
156 2017-06-30 14:26:51 0|morcos|Murch: we're talking about for 0.15 before BAB
157 2017-06-30 14:27:08 0|Murch|ah, I'm sorry.
158 2017-06-30 14:27:30 0|instagibbs|morcos, heh, that's basically what I tried earlier
159 2017-06-30 14:27:33 0|morcos|instagibbs: do we have a sense for how rare these overpayments are? my gut is they are extremely rare, and just being willing to discard it one time will make them all but disappear
160 2017-06-30 14:27:36 0|instagibbs|well, with more complex logic on top
161 2017-06-30 14:27:41 0|morcos|instagibbs: and what happened?
162 2017-06-30 14:28:14 0|morcos|sorry btw, for engaging on this so late, was too caught up in the fee estimate stuff (so many edge cases to get right there too, but probably this should have been more important)
163 2017-06-30 14:28:29 0|instagibbs|appreciate the feedback
164 2017-06-30 14:30:13 0|instagibbs|so to happen twice you'd basically have to oscillate twice, and get exact matches twice
165 2017-06-30 14:30:20 0|Murch|Here's an idea: How about we just raise the target for the knapsack a little, and use that adjustment to buffer missing fees but remain over MIN_CHANGE?
166 2017-06-30 14:31:36 0|morcos|Murch: we allow reducing MIN_CHANGE to MIN_CHANGE/2 to achieve that effect; it's only the case where we found an exact match but used fewer inputs that we have a problem
167 2017-06-30 14:32:14 0|Murch|ah okay
168 2017-06-30 14:32:27 0|Murch|sorry, I'm late to the conversation, maybe I'm not helping :-I
169 2017-06-30 14:34:06 0|morcos|instagibbs: ok here is an even more obviously correct idea i think
170 2017-06-30 14:34:27 0|instagibbs|yeah I'm not quite convincing myself on previous idea
171 2017-06-30 14:34:49 0|instagibbs|I'd like to say "no worse"
172 2017-06-30 14:35:25 0|morcos|what if we put an additional loop around the section that starts at: const CAmount nChange = nValueIn - nValueToSelect;
173 2017-06-30 14:36:12 0|morcos|where we only repeat that section if nFeeRet - nFeeNeeded is too high.. and if it was, then we change nValueToSelect to reflect the new fee, but we don't redo the coin selection
174 2017-06-30 14:36:36 0|morcos|This will result in just calculating that now we do have positive change, and creating the change output for us as expected
175 2017-06-30 14:37:56 0|morcos|I like avoiding the unintended consequences in either of the other two approaches of redoing coin selection if some criteria happens
176 2017-06-30 14:38:14 0|instagibbs|as long as that redefinition doesn't do anything weird, sounds reasonable
177 2017-06-30 14:38:23 0|morcos|In this case we're only running coin selection exactly as many times as we currently do, we're just adding a change output if we didn't have one and the fee is way too high
178 2017-06-30 14:38:36 0|morcos|you don't have to reuse the same variable
179 2017-06-30 14:38:51 0|morcos|structure that little loop to make sense
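(The shape of that idea, paraphrased: pick_new_inputs is the flag from morcos' branch; everything else here is illustrative scaffolding, not the literal patch. The max_excess_fee placeholder corresponds to the threshold discussed below, roughly the fee for a change output plus its dust threshold:)

    #include <cstdint>

    using CAmount = int64_t;

    void CreateTransactionSketch(CAmount nValue)
    {
        bool pick_new_inputs = true;
        CAmount nFeeRet = 0, nValueIn = 0;
        const CAmount max_excess_fee = 10000; // placeholder threshold
        while (true) {
            const CAmount nValueToSelect = nValue + nFeeRet;
            if (pick_new_inputs) {
                // ... run coin selection, setting nValueIn ...
            }
            const CAmount nChange = nValueIn - nValueToSelect;
            // ... build outputs; a change output is created iff nChange is
            //     large enough (here: the no-change case, nChange == 0) ...
            CAmount nFeeNeeded = 0; // ... estimate from the tx size ...
            if (nFeeRet >= nFeeNeeded) {
                if (nChange == 0 && nFeeRet > nFeeNeeded + max_excess_fee) {
                    // Overpaying with no change output: keep the same
                    // inputs, lower the fee, and loop once more so the
                    // surplus re-appears as a change output.
                    pick_new_inputs = false;
                    nFeeRet = nFeeNeeded;
                    continue;
                }
                break; // done: fee within tolerance
            }
            nFeeRet = nFeeNeeded;    // existing behaviour: raise the fee...
            pick_new_inputs = true;  // ...and reselect coins
        }
    }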
180 2017-06-30 14:39:15 0|morcos|also i'm happy to write up that idea if you want to take a look
181 2017-06-30 14:43:54 0|instagibbs|let me take a look and give it a shot
182 2017-06-30 14:49:45 0|morcos|ok, i already started, i'll shoot you a link in a few mins, but i'm happy to let you do in your PR, i just wanted to see if it would work out easily. it might still have issues
183 2017-06-30 14:52:33 0|instagibbs|actually go ahead and work on it, I've got to wrap up work stuff before 4 day weekend
184 2017-06-30 14:52:43 0|instagibbs|thanks
185 2017-06-30 14:57:34 0|morcos|heh, damn... there are a few details that need to be worked out that i was hoping you'd do.
186 2017-06-30 14:57:51 0|morcos|instagibbs: anyway, here is what i started. https://github.com/morcos/bitcoin/commit/d180deabc9a6cfd94814546c48931dfb06eacc3b?w=1
187 2017-06-30 14:58:13 0|instagibbs|hah, just dont tell gmaxwell I'm working on it
188 2017-06-30 14:58:23 0|instagibbs|looking
189 2017-06-30 14:59:58 0|morcos|if that's right, i think we just need to smartly determine the threshold, and we need to convince ourselves or add logic so it can't get stuck in some infinite loop; i don't know if the pick_new_inputs bool needs to be reset, or if it's too confusing to just be confident it'll complete next time, or what
190 2017-06-30 15:11:08 0|instagibbs|smells right, leaning towards "no need to reset" but will need to think more
191 2017-06-30 15:12:24 0|morcos|i don't think resetting could hurt... just in an else to if (pick_new_inputs) you reset it to true... but not sure
192 2017-06-30 15:12:46 0|morcos|also not sure what to do about the threshold, but i think that problem is no different than 10333, it's just more explicit
193 2017-06-30 15:13:21 0|instagibbs|sure, I think reset would be more "default" behavior, easier to review
194 2017-06-30 15:14:13 0|morcos|I think something simplish like GetMinimumFee(Approx size of change output) + Min_Change_size (Dust Threshold of change output) is about right
195 2017-06-30 15:14:18 0|instagibbs|re: threshold, I think you should also be adding the current nChange in the calc
196 2017-06-30 15:14:38 0|morcos|it's maybe a bit hacky, but could be cleaned up once we have effective value machinery in the future
197 2017-06-30 15:14:44 0|morcos|huh?
198 2017-06-30 15:14:48 0|morcos|there is no current nChange
199 2017-06-30 15:15:00 0|morcos|that section is only hit if change output position == -1
201 2017-06-30 15:15:35 0|morcos|anyway, afk for a few
202 2017-06-30 15:15:43 0|instagibbs|right, but that is going to fees, and you may not need to after
203 2017-06-30 15:15:46 0|instagibbs|later
204 2017-06-30 15:15:54 0|morcos|wait what
205 2017-06-30 15:16:01 0|instagibbs|ill annotate on github...
206 2017-06-30 15:16:03 0|morcos|ok
207 2017-06-30 15:17:07 0|instagibbs|nevermind, was wrong
208 2017-06-30 15:43:37 0|sipa|wumpus: zfs
209 2017-06-30 17:30:16 0|bitcoin-git|[13bitcoin] 15morcos opened pull request #10712: Add change output if necessary to reduce excess fee (06master...06addchangehwenoverpaying) 02https://github.com/bitcoin/bitcoin/pull/10712
210 2017-06-30 18:24:56 0|bitcoin-git|[13bitcoin] 15jnewbery closed pull request #10015: Wallet should reject long chains by default (06master...06walletrejectlongchains) 02https://github.com/bitcoin/bitcoin/pull/10015
211 2017-06-30 18:27:16 0|luke-jr|wumpus: is your current key on bitcoin.org? https://bitcoin.org/laanwj-releases.asc
212 2017-06-30 18:27:17 0|bitcoin-git|[13bitcoin] 15jnewbery closed pull request #9943: Make script.py wildcard importable. (06master...06rpctestsprimitives) 02https://github.com/bitcoin/bitcoin/pull/9943
213 2017-06-30 18:31:28 0|wumpus|luke-jr: that's the release signing key, not my personal key, that's laanwj.asc
214 2017-06-30 18:31:34 0|wumpus|but both should be up to date
215 2017-06-30 18:32:44 0|luke-jr|wumpus: hmm, someone's saying it doesn't match the UASF sigs
216 2017-06-30 18:33:00 0|wumpus|the release key certainly won't
217 2017-06-30 18:33:16 0|wumpus|I only use that for sha256sums.asc, which wasn't provided for uasf sigs
218 2017-06-30 18:33:19 0|luke-jr|ah
219 2017-06-30 18:33:35 0|luke-jr|so he should use the laanwj.asc?
220 2017-06-30 18:33:38 0|wumpus|yes
221 2017-06-30 18:34:40 0|wumpus|0x1E4AED62986CD25D is the only subkey I use for signing
222 2017-06-30 19:07:27 0|wumpus|strange, I can't get one node to sync from another with master, which works with 0.14
223 2017-06-30 19:08:02 0|wumpus|(the node I'm syncing from is not up-to-date, but does that matter? it's sending the 'headers', just seems to get ignored)
224 2017-06-30 19:11:38 0|sipa|i believe nodes don't respond to getheaders requests until they're in sync
225 2017-06-30 19:11:49 0|wumpus|just trying to do a benchmark, why does this have to be so difficult every time :(
226 2017-06-30 19:12:15 0|wumpus|yes, I already patched the client node for this, it responds to all getheaders
227 2017-06-30 19:12:31 0|wumpus|I'm sure the headers is being sent, the 0.14 branch node synced fine from it
228 2017-06-30 19:13:31 0|sipa|so what is the setup exactly?
229 2017-06-30 19:13:51 0|sipa|you have a not-fully synced node A, and a new node B, connected to A?
230 2017-06-30 19:14:08 0|wumpus|https://github.com/bitcoin/bitcoin/issues/9923 is the problem you're talking about, but I know about that one :)
231 2017-06-30 19:14:17 0|wumpus|yes
232 2017-06-30 19:15:02 0|wumpus|A only listens, has a part of the block chain, B only connects, and should sync from A so that they end up with the same blocks
233 2017-06-30 19:15:13 0|wumpus|(and most important, same utxo database)
234 2017-06-30 19:21:37 0|wumpus|A has blocks up to 430241
235 2017-06-30 19:22:49 0|sipa|A sends getheaders, B responds with headers?
236 2017-06-30 19:23:09 0|gmaxwell|if you're too far back you'll stay stuck in IBD. In particular you have to at least meet the minimum chain work to get out of IBD.
237 2017-06-30 19:23:09 0|sipa|eh, other way around
238 2017-06-30 19:23:22 0|wumpus|2017-06-30 19:22:11 getheaders 430241 to end from peer=5
239 2017-06-30 19:23:22 0|wumpus|2017-06-30 19:22:11 received: getheaders (997 bytes) peer=5
240 2017-06-30 19:23:27 0|gmaxwell|and I believe 430241 will not.
241 2017-06-30 19:23:28 0|wumpus|2017-06-30 19:22:11 sending headers (82 bytes) peer=5
242 2017-06-30 19:23:31 0|wumpus|(that's on A)
243 2017-06-30 19:23:48 0|wumpus|B gets a headers message with one entry: 2017-06-30 19:22:11 ProcessNewBlockHeaders: 000000000000000003f1410b194facc9092a2b76e99db8653ec1a32edfd2ab29
244 2017-06-30 19:23:49 0|sipa|that's a remarkably small headers packet
245 2017-06-30 19:23:59 0|gmaxwell|if you make IsInitialBlockDownload return false, you'll be good to go; my bet.
246 2017-06-30 19:24:07 0|wumpus|(that's the blockhash for block 430241 )
248 2017-06-30 19:24:30 0|wumpus|on A or B?
249 2017-06-30 19:24:45 0|sipa|so it looks like B already has all the headers?
250 2017-06-30 19:24:56 0|sipa|(up to 430241)
251 2017-06-30 19:24:59 0|gmaxwell|on the thing with 430241 blocks (I believe that is A?)
252 2017-06-30 19:25:13 0|wumpus|sipa: seems so! I can delete B's state and retry if that helps
253 2017-06-30 19:25:22 0|sipa|wumpus: what does getblockchaininfo on B say?
254 2017-06-30 19:25:30 0|gmaxwell|(though if B has the headers this sounds like it might be something else!)
255 2017-06-30 19:25:47 0|sipa|or even getchaintips
256 2017-06-30 19:25:59 0|wumpus|"headers": 430241,
257 2017-06-30 19:26:31 0|wumpus|so it has all the headers, but only blocks up to the genesis block
258 2017-06-30 19:27:05 0|wumpus|so the problem is in B which is not requesting any blocks
259 2017-06-30 19:27:58 0|sipa|so while B is in IBD, there are some restrictions on where it downloads from
260 2017-06-30 19:28:01 0|wumpus|gmaxwell: yes I already patched out the code on A that would make it refuse to send headers when in IBD, I struggled with that one too many times to forget
261 2017-06-30 19:28:05 0|gmaxwell|or it did but A didn't reply? (I assume you have debug=net and so you can tell?)
262 2017-06-30 19:28:15 0|sipa|B only has one connection?
263 2017-06-30 19:28:28 0|sipa|wumpus: getpeerinfo on B does not list any blocks in flight?
264 2017-06-30 19:28:55 0|wumpus|sipa: yes, B can get only one connection, and nothing inflight
265 2017-06-30 19:29:23 0|wumpus|gmaxwell: no getblocks
266 2017-06-30 19:30:01 0|sipa|does getchaintips say anything about invalid chains?
267 2017-06-30 19:30:02 0|gmaxwell|sweet, sounds like a bug.
268 2017-06-30 19:32:11 0|wumpus|getchaintips for A and B (guess B is the relevant one) https://0bin.net/paste/6Kqmm+1VMAhkOTJy#T9erOX4pcLnHu0uW++cugcWEkwDSFdKPlrefkP1nL86
269 2017-06-30 19:32:13 0|sipa|to debug (if the behaviour remains after a restart of B), perhaps add LogPrintf's to all the return statements in FindNextBlocksToDownload on B
270 2017-06-30 19:32:31 0|sipa|there are many cases in which it decides not to return anything
271 2017-06-30 19:32:53 0|sipa|it'd be helpful to know if a) it is being invoked and b) if yes, why it does not return anything
272 2017-06-30 19:33:46 0|sipa|getchaintips looks totally normal
273 2017-06-30 19:33:48 0|gmaxwell|or just attach gdb breakpoint at FindNextBlocksToDownload and restart node A and step through it?
274 2017-06-30 19:33:53 0|wumpus|trying with a C (fresh B) first, to see if it getting stuck after the headers is reproducible
275 2017-06-30 19:34:45 0|wumpus|yep, same problem second time
276 2017-06-30 19:34:56 0|wumpus|received all the headers but making no block requests
277 2017-06-30 19:36:59 0|sipa|wumpus: i'll give you a patch to debug
278 2017-06-30 19:37:35 0|sipa|ah, i think i found it
279 2017-06-30 19:37:56 0|sipa|if (state->pindexBestKnownBlock->nChainWork < UintToArith256(consensusParams.nMinimumChainWork)) { // This peer has nothing interesting
280 2017-06-30 19:38:02 0|gmaxwell|ah!
281 2017-06-30 19:38:23 0|wumpus|"This peer has nothing interesting."
282 2017-06-30 19:38:34 0|wumpus|yes, just narrowed it down to there
283 2017-06-30 19:38:49 0|wumpus|0.14.2 thinks the peer does have something interesting
284 2017-06-30 19:38:57 0|gmaxwell|yea, thats it. e2652002b6011f793185d473f87f1730c625593b added that.
285 2017-06-30 19:39:08 0|gmaxwell|
286 2017-06-30 19:39:08 0|gmaxwell|--
287 2017-06-30 19:39:08 0|gmaxwell|Delay parallel block download until chain has sufficient work
288 2017-06-30 19:39:08 0|gmaxwell|nMinimumChainWork is an anti-DoS threshold; wait until we have a proposed
289 2017-06-30 19:39:11 0|gmaxwell|tip with more work than that before downloading blocks towards that tip.
290 2017-06-30 19:39:14 0|gmaxwell|--
291 2017-06-30 19:39:28 0|wumpus|just going to delete that code for now, see if it works then
292 2017-06-30 19:39:50 0|gmaxwell|basically it impedes a dos attack where I make a long fork of low difficulty blocks with an asic miner and run you out of diskspace fetching those blocks.
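(Stitching together the pieces quoted in this exchange, the guard at the top of FindNextBlocksToDownload read roughly as follows in 2017-era net_processing.cpp; paraphrased, not an exact excerpt. The first clause is the null check that matters shortly: deleting the whole condition also deletes it, leaving the dereference below unprotected:)

    // Paraphrase of the guard added by commit e2652002b6:
    if (state->pindexBestKnownBlock == nullptr ||
        state->pindexBestKnownBlock->nChainWork < chainActive.Tip()->nChainWork ||
        state->pindexBestKnownBlock->nChainWork < UintToArith256(consensusParams.nMinimumChainWork)) {
        // This peer has nothing interesting.
        return;
    }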
293 2017-06-30 19:40:11 0|wumpus|can we please have a mode for benchmarking that disables all these annoying checks :)
294 2017-06-30 19:42:15 0|gmaxwell|wumpus: IIRC sdaftuar wanted a hidden commandline option to let you set the nMinimumChainWork to another value.
295 2017-06-30 19:43:12 0|gribble|https://github.com/bitcoin/bitcoin/issues/9923 | Whitelisting outgoing connections · Issue #9923 · bitcoin/bitcoin · GitHub
296 2017-06-30 19:43:12 0|wumpus|now there's this one, as well as #9923, every time it becomes more difficult to sync nodes to each other in synthetic circumstances
297 2017-06-30 19:43:12 0|wumpus|ok
298 2017-06-30 19:43:15 0|sipa|#10357
299 2017-06-30 19:43:16 0|gribble|https://github.com/bitcoin/bitcoin/issues/10357 | Allow setting nMinimumChainWork on command line by sdaftuar · Pull Request #10357 · bitcoin/bitcoin · GitHub
300 2017-06-30 19:43:56 0|gmaxwell|hurray I remembered who was doing what for once.
301 2017-06-30 19:44:07 0|wumpus|lol apparently that was a bad idea, now it segfaults
302 2017-06-30 19:44:29 0|gmaxwell|you can't remove the null check. :P
303 2017-06-30 19:44:51 0|gmaxwell|git revert e2652002b6011f793185d473f87f1730c625593b
304 2017-06-30 19:45:01 0|wumpus|478 state->pindexLastCommonBlock = chainActive[std::min(state->pindexBestKnownBlock->nHeight, chainActive.Height())];
305 2017-06-30 19:46:05 0|wumpus|right
306 2017-06-30 19:47:48 0|wumpus|that did it, it's syncing
307 2017-06-30 19:48:15 0|wumpus|thanks!