From notdatoneguy at gmail.com Wed Aug 9 18:49:59 2017 From: notdatoneguy at gmail.com (Colin Lacina) Date: Wed, 9 Aug 2017 13:49:59 -0500 Subject: [bitcoin-dev] Structure for Trustless Hybrid Bitcoin Wallets Using P2SH for Recovery Options In-Reply-To: References: Message-ID: I believe I have come up with a structure that allows someone to use a hybrid wallet without having to trust it, while still allowing for emergency recovery of funds in the case of a lost wallet. It would run off of this TX script:

IF
1 2 CHECKMULTISIGVERIFY
ELSE
2 2 CHECKMULTISIG
ENDIF

A typical transaction using this would involve a user signing a TX with their userWalletPrivKey and authenticating with the server, possibly with 2FA using a phone or something like Authy or Google Authenticator. After authentication, the server signs with their serverWalletPrivKey.

In case the server goes rogue and starts refusing to sign, the user can use their userRecoveryPrivKey to send the funds anywhere they choose. Because of this, the userRecoveryPrivKey is best suited to cold wallet storage.

In the more likely event that the user forgets their password and/or loses access to their userWalletPrivKey as well as loses their recovery key, they rely on the serverRecoveryPrivKey.

When the user first sets up their wallet, they provide some basic identity information, set up a recovery password, and/or set up recovery questions and answers. This information is explicitly NOT sent to the server, with the exception of the recovery questions (although the answers remain with the user, never seeing the server). What is sent to the server is its 256-bit hash, used to identify the recovery wallet. The server then creates a 1025 bit nonce, encrypts it, stores it, and transmits it to the user's client.

Meanwhile, the user's wallet client generates the serverRecoveryPrivKey.
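For concreteness, the identifier hash and the recovery-key derivation can be sketched roughly as follows (the concatenation order, the string encoding, and the use of a bare hash are assumptions of this sketch — the proposal leaves them unspecified, and a production design would want a real KDF rather than a plain hash):

```python
import hashlib
import os

def recovery_identifier(identity_info: str, recovery_password: str,
                        questions_and_answers: str) -> bytes:
    # 256-bit hash identifying the recovery wallet; this is all the server sees
    material = (identity_info + recovery_password + questions_and_answers).encode()
    return hashlib.sha256(material).digest()

def recovery_encryption_key(identity_info: str, recovery_password: str,
                            questions_and_answers: str, nonce: bytes) -> bytes:
    # 512-bit key material from the user's secrets plus the server-held nonce
    material = (identity_info + recovery_password + questions_and_answers).encode() + nonce
    return hashlib.sha512(material).digest()

# stand-in for the server-generated nonce (the proposal says 1025 bits;
# rounded up here to 129 whole bytes)
nonce = os.urandom(129)
ident = recovery_identifier("alice example", "hunter2", "Q: first pet? A: rex")
key = recovery_encryption_key("alice example", "hunter2", "Q: first pet? A: rex", nonce)
```

The serverRecoveryPrivKey would then be encrypted under `key` with a symmetric cipher; the cipher itself is not specified in the proposal.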
Once the client has both the serverRecoveryPrivKey and the nonce, it uses SHA512 on the combination of the identity questions and answers, the recovery password (if used), the recovery questions and answers, and the nonce. It uses the resulting hash to encrypt the serverRecoveryPrivKey. Finally, the already encrypted key is encrypted again for transmission to the server. The server decrypts it, then re-encrypts it for long term storage.

When the user needs to resort to using this option, they take the 256-bit hash of their information to build their recovery identifier. The server may, optionally, request e-mail and/or SMS confirmation that the user is actually attempting the recovery.

Next, the server decrypts the saved nonce, as well as the first layer of encryption on the serverRecoveryPrivKey, then encrypts both for transmission to the user's client. Then the client removes the transmission encryption and calculates the 512-bit hash that was used to originally encrypt the serverRecoveryPrivKey by using the provided information and the nonce.

After all of that the user can decrypt the airbitzServerRecoveryPrivKey and use it to send a transaction anywhere they choose.

I was thinking this may make a good informational BIP but would like feedback. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dev at jonasschnelli.ch Wed Aug 9 19:35:26 2017 From: dev at jonasschnelli.ch (Jonas Schnelli) Date: Wed, 9 Aug 2017 21:35:26 +0200 Subject: [bitcoin-dev] Structure for Trustless Hybrid Bitcoin Wallets Using P2SH for Recovery Options In-Reply-To: References: Message-ID: <5C198808-A3BB-413D-A793-0107095EFBE9@jonasschnelli.ch> Hi Colin

> In case the server goes rogue and starts refusing to sign, the user can use their userRecoveryPrivKey to send the funds anywhere they choose. Because if this, the userRecoveryPrivKey is best suited to cold wallet storage.
Would you then assume that userWalletPubKey is a hot key (stored on the user's computer, possibly in a browser-based local storage container)? In case of an attack on the server responsible for serverWalletPubKey (where the personal information of the user is also stored, including the xpub == the amount of funds held by the user), wouldn't this increase the user's risk of being a possible target (false sense of multisig security, compared to cold storage / HWW keys)?

> In the more likely event that the user forgets their password and/or looses access to their userWalletPrivKey as well as loses their recovery key, they rely on the serverRecoveryPrivKey.
>
> When the user first sets up their wallet, they answer some basic identity information, set up a recovery password, and/or set up recovery questions and answers. This information is explicitly NOT sent to serve with the exception of recovery questions (although the answers remain with the user, never seeing the server). What is sent to the server is it's 256 bit hash used to identify the recovery wallet. The server then creates a 1025 bit nonce, encrypts it, stores it, and transmits it to the user's client.

I guess this will result in the protection of the funds stored in this transaction resting entirely on the user's identity information and possibly the optional recovery password, though I guess you are adding additional security against brute-forcing via the server nonce.

Why 1025 bit for the nonce? Why SHA512 instead of SHA256 (I guess you need 256-bit symmetric key material for the key encryption)? Have you considered using a (H)KDF for deriving the symmetric key (even if the server-based nonce reduces the possibility of brute-forcing)?

Your model probably has the TORS (trust on recovery setup) weakness (compared to a HWW where you [should] be protected on compromised systems during private key creation). -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From nickodell at gmail.com Wed Aug 9 20:14:18 2017 From: nickodell at gmail.com (Nick ODell) Date: Wed, 9 Aug 2017 14:14:18 -0600 Subject: [bitcoin-dev] Structure for Trustless Hybrid Bitcoin Wallets Using P2SH for Recovery Options In-Reply-To: References: Message-ID: Colin, 1) This is a good start for a BIP, but it's missing details. For example, the nonce is encrypted by the server. What key is it encrypted with? Clarifying ambiguities like this can sometimes reveal weaknesses that you wouldn't otherwise think of. 2) What kind of recovery questions are asked? If it's something like "What was the name of your first pet?" then what prevents the server from stealing the wallet by trying a dictionary of the most common pet names? Is there a mitigation to this, besides picking cryptographically secure identifiers for my pets? --Nick On Wed, Aug 9, 2017 at 12:49 PM, Colin Lacina via bitcoin-dev < bitcoin-dev at lists.linuxfoundation.org> wrote: > I believe I have come up with a structure that allows for trustless use of > hybrid wallets that would allow for someone to use a hybrid wallet without > having to trust it while still allowing for emergency recovery of funds in > the case of a lost wallet. It would run off of this TX script: > > IF > 1 2 CHECKMULTISIGVERIFY > ELSE > 2 2 CHECKMULTISIG > ENDIF > > A typical transaction using this would involve a user signing a TX with > their userWalletPrivKey, authenticating with the server, possibly with 2FA > using a phone or something like Authy or Google Authenticator. After > authentication, the server signs with their serverWalletPrivKey. > > In case the server goes rogue and starts refusing to sign, the user can > use their userRecoveryPrivKey to send the funds anywhere they choose. 
> Because if this, the userRecoveryPrivKey is best suited to cold wallet > storage. > > In the more likely event that the user forgets their password and/or > looses access to their userWalletPrivKey as well as loses their recovery > key, they rely on the serverRecoveryPrivKey. > > When the user first sets up their wallet, they answer some basic identity > information, set up a recovery password, and/or set up recovery questions > and answers. This information is explicitly NOT sent to serve with the > exception of recovery questions (although the answers remain with the user, > never seeing the server). What is sent to the server is it's 256 bit hash > used to identify the recovery wallet. The server then creates a 1025 bit > nonce, encrypts it, stores it, and transmits it to the user's client. > > Meanwhile, the user's wallet client generates the serverRecoveryPrivKey. > > Once the client has both the serverRecoveryPrivKey, and the nonce, it uses > SHA512 on the combination of the identity questions and answers, the > recovery password (if used), the recovery questions and answers, and the > nonce. It uses the resulting hash to encrypt the serverRecoveryPrivKey. > > Finally, the already encrypted key is encrypted again for transmission to > the server. The server decrypts it, then rencrypts it for long term storage. > > When the user needs to resort to using this option, they 256 bit hash > their information to build their recovery identifier. The server may, > optionally, request e-mail and or SMS confirmation that user is actually > attempting the recovery. > > Next, the server decrypts the saved nonce, as well as the first layer of > encryption on the serverRecoveryPrivKey, then encrypts both for > transmission to the user's client. Then the client removes the transmission > encryption, calculates the 512 bit hash that was used to originally encrypt > the serverRecoveryPrivKey by using the provided information and the nonce. 
> > After all of that the user can decrypt the airbitzServerRecoveryPrivKey > and use it to send a transaction anywhere they choose. > > I was thinking this may make a good informational BIP but would like > feedback. > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eth3rs at gmail.com Fri Aug 11 20:36:59 2017 From: eth3rs at gmail.com (Ethan Heilman) Date: Fri, 11 Aug 2017 16:36:59 -0400 Subject: [bitcoin-dev] ScalingBitcoin 2017: Stanford - Call For Proposals Now Open Message-ID: Dear All,

The Call for Proposals (CFP) for 'Scaling Bitcoin 2017: Stanford' is now open. Please see https://scalingbitcoin.org for details

*Important Dates*
Sept 25th - Deadline for submissions to the CFP
Oct 16th - Applicant acceptance notification

Hope to see you in California (Nov 4-5 2017)

Full CFP can be found at https://scalingbitcoin.org/event/stanford2017#cfp -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik at q32.com Sun Aug 13 18:46:37 2017 From: erik at q32.com (Erik Aronesty) Date: Sun, 13 Aug 2017 14:46:37 -0400 Subject: [bitcoin-dev] Would anyone object to adding a dlopen message hook system? Message-ID: I was thinking about something like this that could add the ability for module extensions in the core client.

When messages are received, module hooks are called with the message data. They can then handle the message, mark the peer invalid, push a message to the peer, or pass through an alternate command. Also, modules could have their own private commands prefixed by "x:" or something like that.

The idea is that the base P2P layer is left undisturbed, but there is now a way to create "enhanced features" that some peers support.
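To make the hook flow concrete, here is a rough sketch in Python of the dispatch semantics being described — a hook can handle a message, penalize the peer, or push a reply, and otherwise processing falls through to the base layer. All names and shapes here are illustrative only; nothing in this sketch is an existing Core API:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class HookResult:
    handled: bool = False                     # stop normal processing?
    misbehaving: int = 0                      # ban score to assign the peer
    push: Optional[Tuple[str, bytes]] = None  # (command, payload) to send back

# registered module hooks, called in order for every received message
hooks: List[Callable[[str, bytes], HookResult]] = []

def ping_pong_hook(command: str, payload: bytes) -> HookResult:
    # example module: answer a private "x:ping" command with "x:pong"
    if command == "x:ping":
        return HookResult(handled=True, push=("x:pong", payload))
    return HookResult()

hooks.append(ping_pong_hook)

def dispatch(command: str, payload: bytes) -> HookResult:
    for hook in hooks:
        result = hook(command, payload)
        if result.handled:
            return result
    return HookResult()  # unhandled: fall through to the base P2P layer
```

An ordinary command like "tx" passes through untouched, while the private "x:" command is consumed by the module.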
My end goal is to support using lightning network micropayments to allow people to pay for better node access - creating a market for node services.

But I don't think this should be "baked in" to core. Nor do I think it should be a "patch". It should be a linked-in module, optionally compiled and added to bitcoin.conf, then loaded via dlopen(). Modules should be slightly robust to Bitcoin versions changing out from under them, but not if the network layer is changed. This can be ensured by a) keeping a module version number, and b) treating module responses as if they were just received from the network. Any module incompatibility should throw an exception...ensuring broken peers don't stay online.

In general I think the core reference client would benefit from the ability to create subnetworks within the Bitcoin ecosystem. Right now, we have two choices... full node and get slammed with traffic, or listen-only node, and do nothing. Adding a module/hook system would allow a complex ecosystem of participation - and it would seem to be far more robust in the long term.

Something like this???

class MessageHookIn {
public:
    int hookversion;
    int64_t nodeid;
    int nVersion;
    int64_t serviceflags;
    const char *strCommand;
    const char *nodeaddr;
    const char *vRecv;
    int vRecvLen;
    int64_t nTimeReceived;
};

class MessageHookOut {
public:
    int hookversion;
    int misbehaving;
    const char *logMsg;
    const char *pushCommand;
    const unsigned char *pushData;
    int pushDataLen;
    const char *passCommand;
    CDataStream passStream;
};

class MessageHook {
public:
    int hookversion;
    std::string name;
    typedef bool (*HandlerType)(const MessageHookIn *in, MessageHookOut *out);
    HandlerType handle;
};

-------------- next part -------------- An HTML attachment was scrubbed... URL: From dev at jonasschnelli.ch Sun Aug 13 20:00:41 2017 From: dev at jonasschnelli.ch (Jonas Schnelli) Date: Sun, 13 Aug 2017 22:00:41 +0200 Subject: [bitcoin-dev] Would anyone object to adding a dlopen message hook system?
In-Reply-To: References: Message-ID: <45BBAF76-CDB7-4D57-920C-70887CABFF48@jonasschnelli.ch> Hi Erik

Thanks for your proposal. In general, modularisation is a good thing, though proposing that Core add modules via dlopen() seems like the wrong direction. Core already has the problem of running too many things in the same process. The consensus logic, p2p system, as well as the wallet AND the GUI all share the same process (!).

A module approach like you describe would be a security nightmare (and Core is currently in the process of separating out the wallet and the GUI into their own processes).

What speaks against using the existing IPC interfaces like RPC/ZMQ? RPC can be bidirectional using long poll.

/jonas

> I was thinking about something like this that could add the ability for module extensions in the core client. > > When messages are received, modules hooks are called with the message data. > > They can then handle, mark the peer invalid, push a message to the peer or pass through an alternate command. Also, modules could have their own private commands prefixed by "x:" or something like that. > > The idea is that the base P2P layer is left undisturbed, but there is now a way to create "enhanced features" that some peers support. > > My end goal is to support using lightning network micropayments to allow people to pay for better node access - creating a market for node services. > > But I don't think this should be "baked in" to core. Nor do I think it should be a "patch". It should be a linked-in module, optionally compiled and added to bitcoin conf, then loaded via dlopen(). Modules should be slightly robust to Bitcoin versions changing out from under them, but not if the network layer is changed. This can be ensured by a) keeping a module version number, and b) treating module responses as if they were just received from the network. Any module incompatibility should throw an exception...ensuring broken peers don't stay online.
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From mark at friedenbach.org Sun Aug 13 20:56:39 2017 From: mark at friedenbach.org (Mark Friedenbach) Date: Sun, 13 Aug 2017 13:56:39 -0700 Subject: [bitcoin-dev] Would anyone object to adding a dlopen message hook system? In-Reply-To: <45BBAF76-CDB7-4D57-920C-70887CABFF48@jonasschnelli.ch> References: <45BBAF76-CDB7-4D57-920C-70887CABFF48@jonasschnelli.ch> Message-ID: Jonas, I think his proposal is to enable extending the P2P layer, e.g. adding new message types. Are you suggesting having externalized message processing? That could be done via RPC/ZMQ while opening up a much more narrow attack surface than dlopen, although I imagine such an interface would require a very complex API specification. On Sun, Aug 13, 2017 at 1:00 PM, Jonas Schnelli via bitcoin-dev wrote: > Hi Erik > > Thanks for your proposal. > In general, modularisation is a good thing, though proposing core to add modules wie dlopen() seems the wrong direction. > Core already has the problem of running to many things in the same process. The consensus logic, p2p system as well as the wallet AND the GUI do all share the same process (!). > > A module approach like you describe would be a security nightmare (and Core is currently in the process of separating out the wallet and the GUI into its own process). > > What does speak against using the existing IPC interfaces like RPC/ZMQ? > RPC can be bidirectional using long poll. > > /jonas > >> I was thinking about something like this that could add the ability for module extensions in the core client. >> >> When messages are received, modules hooks are called with the message data. >> >> They can then handle, mark the peer invalid, push a message to the peer or pass through an alternate command. 
Also, modules could have their own private commands prefixed by "x:" or something like that. >> >> The idea is that the base P2P layer is left undisturbed, but there is > now a way to create "enhanced features" that some peers support. > >> > >> My end goal is to support using lightning network micropayments to > allow people to pay for better node access - creating a market for node > services. > >> > >> But I don't think this should be "baked in" to core. Nor do I think > it should be a "patch". It should be a linked-in module, optionally > compiled and added to bitcoin conf, then loaded via dlopen().
That could be done via RPC/ZMQ while opening up a > much more narrow attack surface than dlopen, although I imagine such > an interface would require a very complex API specification. > > On Sun, Aug 13, 2017 at 1:00 PM, Jonas Schnelli via bitcoin-dev > wrote: > > Hi Erik > > > > Thanks for your proposal. > > In general, modularisation is a good thing, though proposing core to add > modules wie dlopen() seems the wrong direction. > > Core already has the problem of running to many things in the same > process. The consensus logic, p2p system as well as the wallet AND the GUI > do all share the same process (!). > > > > A module approach like you describe would be a security nightmare (and > Core is currently in the process of separating out the wallet and the GUI > into its own process). > > > > What does speak against using the existing IPC interfaces like RPC/ZMQ? > > RPC can be bidirectional using long poll. > > > > /jonas > > > >> I was thinking about something like this that could add the ability for > module extensions in the core client. > >> > >> When messages are received, modules hooks are called with the message > data. > >> > >> They can then handle, mark the peer invalid, push a message to the peer > or pass through an alternate command. Also, modules could have their own > private commands prefixed by "x:" or something like that. > >> > >> The idea is that the base P2P layer is left undisturbed, but there is > now a way to create "enhanced features" that some peers support. > >> > >> My end goal is to support using lightning network micropayments to > allow people to pay for better node access - creating a market for node > services. > >> > >> But I don't think this should be "baked in" to core. Nor do I think > it should be a "patch". It should be a linked-in module, optionally > compiled and added to bitcoin conf, then loaded via dlopen().
Modules > should be slightly robust to Bitcoin versions changing out from under them, > but not if the network layer is changed. This can be ensured by a) > keeping a module version number, and b) treating module responses as if > they were just received from the network. Any module incompatibility > should throw an exception...ensuring broken peers don't stay online. > > > > _______________________________________________ > > bitcoin-dev mailing list > > bitcoin-dev at lists.linuxfoundation.org > > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omarshib at gmail.com Mon Aug 14 06:05:35 2017 From: omarshib at gmail.com (omar shibli) Date: Mon, 14 Aug 2017 09:05:35 +0300 Subject: [bitcoin-dev] BIP proposal, Pay to Contract BIP43 Application Message-ID: Hey all,

A lot of us are familiar with the pay to contract protocol, and how it cleverly uses the homomorphic property of the elliptic curve cryptosystem to achieve it. Unfortunately, there is no standard specification of how to conduct such transactions.

We have developed a basic trade finance application that relies on the original idea described in the Homomorphic Payment Addresses and the Pay-to-Contract Protocol paper, yet we have generalized it and made it BIP43 compliant.

We would like to share our method and get your feedback about it; hopefully this effort will result in a standard for the benefit of the community.

Abstract idea:

We define the following levels in the BIP32 path:
m / purpose' / coin_type' / contract_id' / *

contract_id is an arbitrary number within the valid range of indices.

Then we define the contract base as the following prefix:
m / purpose' / coin_type' / contract_id'

The contract commitment address is computed as follows: hash the document using a cryptographic hash function of your choice (e.g. blake2), then map the hash to a partial derivation path:
- Convert the hash to a binary array.
- Partition the array into parts, each of length 16 bits.
- Convert each part to an integer in decimal format.
- Convert each integer to a string.
- Join all strings with slash `/`.

compute the child public key by chaining the derivation path from step 2 with the contract base: m//
compute the address

Example:

master private extended key: xprv9s21ZrQH143K2JF8RafpqtKiTbsbaxEeUaMnNHsm5o6wCW3z8ySyH4UxFVSfZ8n7ESu7fgir8imbZKLYVBxFPND1pniTZ81vKfd45EHKX73
coin type: 0
contract id: 7777777

contract base computation:
derivation path: m/999'/0'/7777777'
contract base public extended key: xpub6CMCS9rY5GKdkWWyoeXEbmJmxGgDcbihofyARxucufdw7k3oc1JNnniiD5H2HynKBwhaem4KnPTue6s9R2tcroqkHv7vpLFBgbKRDwM5WEE

Contract content: foo

Contract sha256 signature: 2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae

Contract partial derivation path: 11302/46187/26879/50831/63899/17724/7472/16692/4930/11632/25731/49056/63882/24200/25190/59310

Contract commitment pub key path: m/999'/0'/7777777'/11302/46187/26879/50831/63899/17724/7472/16692/4930/11632/25731/49056/63882/24200/25190/59310
or
/11302/46187/26879/50831/63899/17724/7472/16692/4930/11632/25731/49056/63882/24200/25190/59310

Contract commitment pub key: xpub6iQVNpbZxdf9QJC8mGmz7cd3Cswt2itcQofZbKmyka5jdvQKQCqYSDFj8KCmRm4GBvcQW8gaFmDGAfDyz887msEGqxb6Pz4YUdEH8gFuaiS

Contract commitment address: 17yTyx1gXPPkEUN1Q6Tg3gPFTK4dhvmM5R

You can find the full BIP draft in the following link:
https://github.com/commerceblock/pay-to-contract-protocol-specification/blob/master/bip-draft.mediawiki

Regards,
Omar -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at xiph.org Tue Aug 15 05:12:11 2017 From: greg at xiph.org (Gregory Maxwell) Date: Tue, 15 Aug 2017 05:12:11 +0000 Subject: [bitcoin-dev] BIP proposal, Pay to Contract BIP43 Application In-Reply-To: References: Message-ID: This construction appears to me to be completely insecure.

Say my pubkey (the result of the derivation path) is P. We agree to contract C1.
A payment is made to P + G*H(C1). But in secret, I constructed contract C2 and pubkey Q and set P = Q + G*H(C2). Now I can take that payment (paid to Q + G*H(C1) + G*H(C2)) and assert it was in fact a payment to P' + G*H(C2). (P' is simply Q + G*H(C1))

I don't see anything in the proposal that addresses this. Am I missing it?

The applications are also not clear to me, and it doesn't appear to address durability issues (how do you avoid losing your funds if you lose the exact contract?).

On Mon, Aug 14, 2017 at 6:05 AM, omar shibli via bitcoin-dev wrote: > Hey all, > > A lot of us familiar with the pay to contract protocol, and how it uses > cleverly the homomorphic property of elliptic curve encryption system to > achieve it. > Unfortunately, there is no standard specification on how to conduct such > transactions in the cyberspace. > > We have developed a basic trade finance application that relies on the > original idea described in the Homomorphic Payment Addresses and the > Pay-to-Contract Protocol paper, yet we have generalized it and made it BIP43 > complaint. > > We would like to share our method, and get your feedback about it, hopefully > this effort will result into a standard for the benefit of the community. > > Abstract idea: > > We define the following levels in BIP32 path. > m / purpose' / coin_type' / contract_id' / * > > contract_id is is an arbitrary number within the valid range of indices. > > Then we define, contract base as following prefix: > m / purpose' / coin_type' / contract_id' > > contract commitment address is computed as follows: > hash document using cryptographic hash function of your choice (e.g. blake2) > map hash to partial derivation path > Convert hash to binary array. > Partition the array into parts, each part length should be 16. > Convert each part to integer in decimal format. > Convert each integer to string. > Join all strings with slash `/`.
> compute child public key by chaining the derivation path from step 2 with > contract base: > m// > compute address > Example: > > master private extended key: > xprv9s21ZrQH143K2JF8RafpqtKiTbsbaxEeUaMnNHsm5o6wCW3z8ySyH4UxFVSfZ8n7ESu7fgir8imbZKLYVBxFPND1pniTZ81vKfd45EHKX73 > coin type: 0 > contract id: 7777777 > > contract base computation : > > derivation path: > m/999'/0'/7777777' > contract base public extended key: > xpub6CMCS9rY5GKdkWWyoeXEbmJmxGgDcbihofyARxucufdw7k3oc1JNnniiD5H2HynKBwhaem4KnPTue6s9R2tcroqkHv7vpLFBgbKRDwM5WEE > > Contract content: > foo > > Contract sha256 signature: > 2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae > > Contract partial derivation path: > 11302/46187/26879/50831/63899/17724/7472/16692/4930/11632/25731/49056/63882/24200/25190/59310 > > Contract commitment pub key path: > m/999'/0'/7777777'/11302/46187/26879/50831/63899/17724/7472/16692/4930/11632/25731/49056/63882/24200/25190/59310 > or > /11302/46187/26879/50831/63899/17724/7472/16692/4930/11632/25731/49056/63882/24200/25190/59310 > > Contract commitment pub key: > xpub6iQVNpbZxdf9QJC8mGmz7cd3Cswt2itcQofZbKmyka5jdvQKQCqYSDFj8KCmRm4GBvcQW8gaFmDGAfDyz887msEGqxb6Pz4YUdEH8gFuaiS > > Contract commitment address: > 17yTyx1gXPPkEUN1Q6Tg3gPFTK4dhvmM5R > > > You can find the full BIP draft in the following link: > https://github.com/commerceblock/pay-to-contract-protocol-specification/blob/master/bip-draft.mediawiki > > > Regards, > Omar > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > From erik at q32.com Tue Aug 15 04:44:52 2017 From: erik at q32.com (Erik Aronesty) Date: Tue, 15 Aug 2017 00:44:52 -0400 Subject: [bitcoin-dev] Would anyone object to adding a dlopen message hook system? 
In-Reply-To: References: <45BBAF76-CDB7-4D57-920C-70887CABFF48@jonasschnelli.ch> Message-ID: The idea is that some peers, when you connect to them, will work fine for some time, but you need to find out the rate for services and send a micropayment to maintain the connection. This creates an optional pay layer for high quality services, and also creates DDOS resistance in this fallback layer. -------------- next part -------------- An HTML attachment was scrubbed... URL: From omarshib at gmail.com Tue Aug 15 16:40:36 2017 From: omarshib at gmail.com (omar shibli) Date: Tue, 15 Aug 2017 19:40:36 +0300 Subject: [bitcoin-dev] BIP proposal, Pay to Contract BIP43 Application In-Reply-To: References: Message-ID: Thank you for your time Gregory, I really appreciate that.

What we are describing here is a method to embed cryptographic signatures into a public key based on HD Wallets - BIP32. In a practical application, we should have two cryptographic signatures from both sides; I don't think your scenario would be an issue in that case.

More specifically, in our application we do the following construction:

contract base: m/200'/0'/'
payment base (merchant commitment): contract_base/
payment address (customer commitment): contract_base//

Payment address funds could be reclaimed only if the customer_contract_signature is provided by the customer.

In terms of durability, our app is pretty simple at this point; we don't store anything, we let the customer download and manage the files.

I will update the BIP to address your concerns.

On Tue, Aug 15, 2017 at 8:12 AM, Gregory Maxwell wrote: > This construction appears to me to be completely insecure. > > > Say my pubkey (the result of the derivation path) is P. > > We agree to contract C1. A payment is made to P + G*H(C1). > > But in secret, I constructed contract C2 and pubkey Q and set P = Q + > G*H(C2). > > Now I can take that payment (paid to Q + G*(C1) + G*H(C2)) and assert > it was in act a payment to P' + G*H(C2).
(P' is simply Q + G*H(C1)) > > I don't see anything in the proposal that addresses this. Am I missing it? > > The applications are also not clear to me, and it doesn't appear to > address durability issues (how do you avoid losing your funds if you > lose the exact contract?). > > > > > On Mon, Aug 14, 2017 at 6:05 AM, omar shibli via bitcoin-dev > wrote: > > Hey all, > > > > A lot of us familiar with the pay to contract protocol, and how it uses > > cleverly the homomorphic property of elliptic curve encryption system to > > achieve it. > > Unfortunately, there is no standard specification on how to conduct such > > transactions in the cyberspace. > > > > We have developed a basic trade finance application that relies on the > > original idea described in the Homomorphic Payment Addresses and the > > Pay-to-Contract Protocol paper, yet we have generalized it and made it > BIP43 > > complaint. > > > > We would like to share our method, and get your feedback about it, > hopefully > > this effort will result into a standard for the benefit of the community. > > > > Abstract idea: > > > > We define the following levels in BIP32 path. > > m / purpose' / coin_type' / contract_id' / * > > > > contract_id is is an arbitrary number within the valid range of indices. > > > > Then we define, contract base as following prefix: > > m / purpose' / coin_type' / contract_id' > > > > contract commitment address is computed as follows: > > hash document using cryptographic hash function of your choice (e.g. > blake2) > > map hash to partial derivation path > > Convert hash to binary array. > > Partition the array into parts, each part length should be 16. > > Convert each part to integer in decimal format. > > Convert each integer to string. > > Join all strings with slash `/`. 
> > compute child public key by chaining the derivation path from step 2 with > > contract base: > > m// > > compute address > > Example: > > > > master private extended key: > > xprv9s21ZrQH143K2JF8RafpqtKiTbsbaxEeUaMnNHsm5o6wCW3z8ySyH4Ux > FVSfZ8n7ESu7fgir8imbZKLYVBxFPND1pniTZ81vKfd45EHKX73 > > coin type: 0 > > contract id: 7777777 > > > > contract base computation : > > > > derivation path: > > m/999'/0'/7777777' > > contract base public extended key: > > xpub6CMCS9rY5GKdkWWyoeXEbmJmxGgDcbihofyARxucufdw7k3oc1JNnnii > D5H2HynKBwhaem4KnPTue6s9R2tcroqkHv7vpLFBgbKRDwM5WEE > > > > Contract content: > > foo > > > > Contract sha256 signature: > > 2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae > > > > Contract partial derivation path: > > 11302/46187/26879/50831/63899/17724/7472/16692/4930/11632/ > 25731/49056/63882/24200/25190/59310 > > > > Contract commitment pub key path: > > m/999'/0'/7777777'/11302/46187/26879/50831/63899/17724/ > 7472/16692/4930/11632/25731/49056/63882/24200/25190/59310 > > or > > /11302/46187/26879/50831/ > 63899/17724/7472/16692/4930/11632/25731/49056/63882/24200/25190/59310 > > > > Contract commitment pub key: > > xpub6iQVNpbZxdf9QJC8mGmz7cd3Cswt2itcQofZbKmyka5jdvQKQCqYSDFj > 8KCmRm4GBvcQW8gaFmDGAfDyz887msEGqxb6Pz4YUdEH8gFuaiS > > > > Contract commitment address: > > 17yTyx1gXPPkEUN1Q6Tg3gPFTK4dhvmM5R > > > > > > You can find the full BIP draft in the following link: > > https://github.com/commerceblock/pay-to-contract- > protocol-specification/blob/master/bip-draft.mediawiki > > > > > > Regards, > > Omar > > > > _______________________________________________ > > bitcoin-dev mailing list > > bitcoin-dev at lists.linuxfoundation.org > > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > > -------------- next part -------------- An HTML attachment was scrubbed... 
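The hash-to-path mapping described above (hash the contract, partition the 256-bit digest into sixteen 16-bit integers, join with `/`) is mechanical enough to sketch in a few lines of Python. This is an illustrative reimplementation of the listed steps, not the reference code from the BIP draft, and the function name `contract_path` is invented here; it reproduces the partial derivation path shown for the contract content "foo":

```python
import hashlib

def contract_path(document: bytes) -> str:
    """Map a document to a partial BIP32 derivation path:
    sha256(document) -> sixteen 16-bit big-endian integers joined by '/'."""
    digest = hashlib.sha256(document).digest()      # 32 bytes = 256 bits
    parts = [int.from_bytes(digest[i:i + 2], "big") # partition into 16-bit parts
             for i in range(0, len(digest), 2)]
    return "/".join(str(p) for p in parts)

path = contract_path(b"foo")
print(path)
# 11302/46187/26879/50831/63899/17724/7472/16692/4930/11632/25731/49056/63882/24200/25190/59310
```

Appending this path to the hardened contract base `m/999'/0'/7777777'` gives the commitment public key path from the example. Note that all sixteen components are below 2^16, so they are non-hardened indices and the path can be derived from the contract base's extended public key alone.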
URL: From alex.mutovkin at gmail.com Wed Aug 16 16:20:45 2017 From: alex.mutovkin at gmail.com (=?UTF-8?B?0JDQu9C10LrRgdC10Lkg0JzRg9GC0L7QstC60LjQvQ==?=) Date: Wed, 16 Aug 2017 19:20:45 +0300 Subject: [bitcoin-dev] Fwd: Proposal of a new BIP : annual splitting blockchain database to reduce its size. In-Reply-To: References: Message-ID: Let me describe the possible improvement of the bitcoin blockchain database (BBD) size in general terms. We can implement new routine : annual split of the BBD. Reason is that 140gb full wallet unconvinience. BBD splits in two parts : 1) old blocks before the date of split and 2) new blocks, starting from first technical block with all rolled totals on the date of split. (also possible transfer of tiny totals due to their unprofitability to the miners, so we cut long tail of tiny holders) 3) old blocks packs into annual megablocks and stores in the side archive chain for some needs for FBI investigations or other goals. Thanks for your attention, Alexey Mutovkin -------------- next part -------------- An HTML attachment was scrubbed... URL: From nickodell at gmail.com Wed Aug 16 16:52:01 2017 From: nickodell at gmail.com (Nick ODell) Date: Wed, 16 Aug 2017 10:52:01 -0600 Subject: [bitcoin-dev] Fwd: Proposal of a new BIP : annual splitting blockchain database to reduce its size. In-Reply-To: References: Message-ID: What makes this approach better than the prune option of Bitcoin? On Wed, Aug 16, 2017 at 10:20 AM, ??????? ???????? via bitcoin-dev < bitcoin-dev at lists.linuxfoundation.org> wrote: > > Let me describe the possible improvement of the bitcoin blockchain > database (BBD) size in general terms. > > We can implement new routine : annual split of the BBD. Reason is that > 140gb full wallet unconvinience. > > BBD splits in two parts : > 1) old blocks before the date of split and > 2) new blocks, starting from first technical block with all rolled totals > on the date of split. 
> (also possible transfer of tiny totals due to their unprofitability to > the miners, so we cut long tail of tiny holders) > 3) old blocks packs into annual megablocks and stores in the side archive > chain for some needs for FBI investigations or other goals. > > > Thanks for your attention, > > Alexey Mutovkin > > > > > > > > > > > > > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.mutovkin at gmail.com Wed Aug 16 17:37:34 2017 From: alex.mutovkin at gmail.com (=?UTF-8?B?0JDQu9C10LrRgdC10Lkg0JzRg9GC0L7QstC60LjQvQ==?=) Date: Wed, 16 Aug 2017 20:37:34 +0300 Subject: [bitcoin-dev] Fwd: Proposal of a new BIP : annual splitting blockchain database to reduce its size. In-Reply-To: References: Message-ID: I read about the prune option just now; actually I hadn't heard about it before. Yes, this option can save some disk space, but AFAIK the first (awful, N-days-lasting) synchronization still requires downloading the full database. My approach also cuts the database and replaces all old blocks (except, say, the last 6 blocks for security reasons) with a series of blocks with rolled initial totals, optionally purged of tiny-wallet dust (storing, on six thousand current nodes and on the swarm of full wallets, the information that John has 100 satoshi is too expensive for us, and we may annually clear that balance as a fee for miners or just delete it). So almost all nodes can hold only the rolled database (I can't estimate the compression ratio of the rolled database now; I am not an advanced user, as you can see). And only a much smaller number of archive nodes holds the full expanded database. 2017-08-16 19:52 GMT+03:00 Nick ODell : > What makes this approach better than the prune option of Bitcoin? > > On Wed, Aug 16, 2017 at 10:20 AM, Алексей Мутовкин 
via bitcoin-dev < > bitcoin-dev at lists.linuxfoundation.org> wrote: > >> >> Let me describe the possible improvement of the bitcoin blockchain >> database (BBD) size in general terms. >> >> We can implement new routine : annual split of the BBD. Reason is that >> 140gb full wallet unconvinience. >> >> BBD splits in two parts : >> 1) old blocks before the date of split and >> 2) new blocks, starting from first technical block with all rolled totals >> on the date of split. >> (also possible transfer of tiny totals due to their unprofitability >> to the miners, so we cut long tail of tiny holders) >> 3) old blocks packs into annual megablocks and stores in the side archive >> chain for some needs for FBI investigations or other goals. >> >> >> Thanks for your attention, >> >> Alexey Mutovkin >> >> >> >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> bitcoin-dev mailing list >> bitcoin-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luke at dashjr.org Wed Aug 16 18:33:47 2017 From: luke at dashjr.org (Luke Dashjr) Date: Wed, 16 Aug 2017 18:33:47 +0000 Subject: [bitcoin-dev] Fwd: Proposal of a new BIP : annual splitting blockchain database to reduce its size. In-Reply-To: References: Message-ID: <201708161833.48897.luke@dashjr.org> To have a BIP, you need to explain not only *why* you want to do something, but also *what specifically* to do, and *how* to do it. This concept (historically known as "flip the chain" and/or "UTXO commitments") is not new, merely complicated to design and implement. Luke On Wednesday 16 August 2017 4:20:45 PM ??????? ???????? via bitcoin-dev wrote: > Let me describe the possible improvement of the bitcoin blockchain database > (BBD) size in general terms. > > We can implement new routine : annual split of the BBD. Reason is that > 140gb full wallet unconvinience. 
> > BBD splits in two parts : > 1) old blocks before the date of split and > 2) new blocks, starting from first technical block with all rolled totals > on the date of split. > (also possible transfer of tiny totals due to their unprofitability to > the miners, so we cut long tail of tiny holders) > 3) old blocks packs into annual megablocks and stores in the side archive > chain for some needs for FBI investigations or other goals. > > > Thanks for your attention, > > Alexey Mutovkin From kanzure at gmail.com Thu Aug 17 11:31:30 2017 From: kanzure at gmail.com (Bryan Bishop) Date: Thu, 17 Aug 2017 06:31:30 -0500 Subject: [bitcoin-dev] Fwd: [Lightning-dev] Lightning in the setting of blockchain hardforks In-Reply-To: References: Message-ID: ---------- Forwarded message ---------- From: Christian Decker Date: Thu, Aug 17, 2017 at 5:39 AM Subject: Re: [Lightning-dev] Lightning in the setting of blockchain hardforks To: Martin Schwarz , lightning-dev at lists.linuxfoundation.org Hi Martin, this is the perfect venue to discuss this, welcome to the mailing list :-) Like you I think that using the first forked block as the forkchain's genesis block is the way to go, keeping the non-forked blockchain on the original genesis hash, to avoid disruption. It may become more difficult in the case one chain doesn't declare itself to be the forked chain. Even more interesting are channels that are open during the fork. In these cases we open a single channel, and will have to settle two. If no replay protection was implemented on the fork, then we can use the last commitment to close the channel (updates should be avoided since they now double any intended effect), if replay protection was implemented then commitments become invalid on the fork, and people will lose money. Fun times ahead :-) Cheers, Christian On Thu, Aug 17, 2017 at 10:53 AM Martin Schwarz wrote: > Dear all, > > currently the chain_id allows to distinguish blockchains by the hash of > their genesis block. 
> > With hardforks branching off of the Bitcoin blockchain, how can Lightning > work on (or across) > distinct, permanent forks of a parent blockchain that share the same > genesis block? > > I suppose changing the definition of chain_id to the hash of the first > block of the new > branch and requiring replay and wipe-out protection should be sufficient. > But can we > relax these requirements? Are slow block times an issue? Can we use > Lightning to transact > on "almost frozen" block chains suffering from a sudden loss of hashpower? > > Has there been any previous discussion or study of Lightning in the > setting of hardforks? > (Is this the right place to discuss this? If not, where would be the right > place?) > > thanks, > Martin > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > _______________________________________________ Lightning-dev mailing list Lightning-dev at lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev -- - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... URL: From conrad.burchert at googlemail.com Thu Aug 17 12:48:15 2017 From: conrad.burchert at googlemail.com (Conrad Burchert) Date: Thu, 17 Aug 2017 14:48:15 +0200 Subject: [bitcoin-dev] Fwd: [Lightning-dev] Lightning in the setting of blockchain hardforks In-Reply-To: References: Message-ID: Some notes: Hardforks like Bitcoin ABC without a malleability fix are very unlikely to have payment channels, so the problem does not exist for those. The designers of a hardfork which does have a malleability fix will probably know about payment channels, so they can just build a replay protection that allows the execution of old commitments. That needs some kind of timestamping of commitments, which would have to be integrated in the channel design. 
The easiest way would be to just write the time of signing the commitment in the transaction; the replay protection then accepts old commitments but rejects ones that were signed after the hardfork. These timestamps can essentially be one bit (before or after a hardfork), and if the replay protection in the hardfork only accepts old commitments for something like a year, then it can be reused for more hardforks later on. Maybe someone comes up with an interesting way of doing this without using space. Nevertheless, hardforking while having channels open will always be a mess, as an open channel requires you to watch the blockchain. Anybody who is just not aware of the hardfork, or who updates his client a few days too late, can get his money stolen by an old commitment transaction when he forgets to retaliate on the new chain. As others can likely figure out your client version, the risk of retaliation is not too big for an attacker. 2017-08-17 13:31 GMT+02:00 Bryan Bishop via bitcoin-dev < bitcoin-dev at lists.linuxfoundation.org>: > > ---------- Forwarded message ---------- > From: Christian Decker > Date: Thu, Aug 17, 2017 at 5:39 AM > Subject: Re: [Lightning-dev] Lightning in the setting of blockchain > hardforks > To: Martin Schwarz , lightning-dev at lists. > linuxfoundation.org > > > Hi Martin, > > this is the perfect venue to discuss this, welcome to the mailing list :-) > Like you I think that using the first forked block as the forkchain's > genesis block is the way to go, keeping the non-forked blockchain on the > original genesis hash, to avoid disruption. It may become more difficult in > the case one chain doesn't declare itself to be the forked chain. > > Even more interesting are channels that are open during the fork. In these > cases we open a single channel, and will have to settle two. 
If no replay > protection was implemented on the fork, then we can use the last commitment > to close the channel (updates should be avoided since they now double any > intended effect), if replay protection was implemented then commitments > become invalid on the fork, and people will lose money. > > Fun times ahead :-) > > Cheers, > Christian > > On Thu, Aug 17, 2017 at 10:53 AM Martin Schwarz > wrote: > >> Dear all, >> >> currently the chain_id allows to distinguish blockchains by the hash of >> their genesis block. >> >> With hardforks branching off of the Bitcoin blockchain, how can Lightning >> work on (or across) >> distinct, permanent forks of a parent blockchain that share the same >> genesis block? >> >> I suppose changing the definition of chain_id to the hash of the first >> block of the new >> branch and requiring replay and wipe-out protection should be sufficient. >> But can we >> relax these requirements? Are slow block times an issue? Can we use >> Lightning to transact >> on "almost frozen" block chains suffering from a sudden loss of hashpower? >> >> Has there been any previous discussion or study of Lightning in the >> setting of hardforks? >> (Is this the right place to discuss this? If not, where would be the >> right place?) 
>> >> thanks, >> Martin >> _______________________________________________ >> Lightning-dev mailing list >> Lightning-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev >> > > _______________________________________________ > Lightning-dev mailing list > Lightning-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev > > > > > -- > - Bryan > http://heybryan.org/ > 1 512 203 0507 <(512)%20203-0507> > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From natanael.l at gmail.com Thu Aug 17 13:38:26 2017 From: natanael.l at gmail.com (Natanael) Date: Thu, 17 Aug 2017 15:38:26 +0200 Subject: [bitcoin-dev] Fwd: [Lightning-dev] Lightning in the setting of blockchain hardforks In-Reply-To: References: Message-ID: Couldn't scripts like this have a standardized "hardfork unroll" mechanism, where if a hardfork is activated and signaled to its clients, then those commitments that are only meant for their original chain can be reversed and undone just on the hardfork? Then the users involved would just send an unroll transaction which is only valid on the hardfork. - Sent from my phone Den 17 aug. 2017 14:52 skrev "Conrad Burchert via bitcoin-dev" < bitcoin-dev at lists.linuxfoundation.org>: > Some notes: > > Hardforks like Bitcoin ABC without a malleability fix are very unlikely to > have payment channels, so the problem does not exist for those. > > The designers of a hardfork which does have a malleability fix will > probably know about payment channels, so they can just build a replay > protection that allows the execution of old commitments. That needs some > kind of timestamping of commitments, which would have to be integrated in > the channel design. 
The easiest way would be to just write the time of > signing the commitment in the transaction and the replay protection accepts > old commitments, but rejects one's which were signed after the hardfork. > These timestamps can essentially be one bit (before or after a hardfork) > and if the replay protection in the hardfork only accepts old commitments > for something like a year, then it can be reused for more hardforks later > on. Maybe someone comes up with an interesting way of doing this without > using space. > > Nevertheless hardforking while having channels open will always be a mess > as an open channel requires you to watch the blockchain. Anybody who is > just not aware of the hardfork or is updating his client a few days too > late, can get his money stolen by an old commitment transaction where he > forgets to retaliate on the new chain. As other's can likely figure out > your client version the risk of retaliation is not too big for an attacker. > > > > 2017-08-17 13:31 GMT+02:00 Bryan Bishop via bitcoin-dev < > bitcoin-dev at lists.linuxfoundation.org>: > >> >> ---------- Forwarded message ---------- >> From: Christian Decker >> Date: Thu, Aug 17, 2017 at 5:39 AM >> Subject: Re: [Lightning-dev] Lightning in the setting of blockchain >> hardforks >> To: Martin Schwarz , >> lightning-dev at lists.linuxfoundation.org >> >> >> Hi Martin, >> >> this is the perfect venue to discuss this, welcome to the mailing list :-) >> Like you I think that using the first forked block as the forkchain's >> genesis block is the way to go, keeping the non-forked blockchain on the >> original genesis hash, to avoid disruption. It may become more difficult in >> the case one chain doesn't declare itself to be the forked chain. >> >> Even more interesting are channels that are open during the fork. In >> these cases we open a single channel, and will have to settle two. 
If no >> replay protection was implemented on the fork, then we can use the last >> commitment to close the channel (updates should be avoided since they now >> double any intended effect), if replay protection was implemented then >> commitments become invalid on the fork, and people will lose money. >> >> Fun times ahead :-) >> >> Cheers, >> Christian >> >> On Thu, Aug 17, 2017 at 10:53 AM Martin Schwarz >> wrote: >> >>> Dear all, >>> >>> currently the chain_id allows to distinguish blockchains by the hash of >>> their genesis block. >>> >>> With hardforks branching off of the Bitcoin blockchain, how can >>> Lightning work on (or across) >>> distinct, permanent forks of a parent blockchain that share the same >>> genesis block? >>> >>> I suppose changing the definition of chain_id to the hash of the first >>> block of the new >>> branch and requiring replay and wipe-out protection should be >>> sufficient. But can we >>> relax these requirements? Are slow block times an issue? Can we use >>> Lightning to transact >>> on "almost frozen" block chains suffering from a sudden loss of >>> hashpower? >>> >>> Has there been any previous discussion or study of Lightning in the >>> setting of hardforks? >>> (Is this the right place to discuss this? If not, where would be the >>> right place?) 
>>> >>> thanks, >>> Martin >>> _______________________________________________ >>> Lightning-dev mailing list >>> Lightning-dev at lists.linuxfoundation.org >>> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev >>> >> >> _______________________________________________ >> Lightning-dev mailing list >> Lightning-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/lightning-dev >> >> >> >> >> -- >> - Bryan >> http://heybryan.org/ >> 1 512 203 0507 <(512)%20203-0507> >> >> _______________________________________________ >> bitcoin-dev mailing list >> bitcoin-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev >> >> > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From achow101-lists at achow101.com Fri Aug 18 22:11:14 2017 From: achow101-lists at achow101.com (Andrew Chow) Date: Fri, 18 Aug 2017 22:11:14 +0000 Subject: [bitcoin-dev] [BIP Proposal] Partially Signed Bitcoin Transaction (PSBT) format Message-ID: Hi everyone, I would like to propose a standard format for unsigned and partially signed transactions. ===Abstract=== This document proposes a binary transaction format which contains the information necessary for a signer to produce signatures for the transaction and holds the signatures for an input while the input does not have a complete set of signatures. The signer can be offline as all necessary information will be provided in the transaction. ===Motivation=== Creating unsigned or partially signed transactions to be passed around to multiple signers is currently implementation dependent, making it hard for people who use different wallet software from being able to easily do so. 
One of the goals of this document is to create a standard and extensible format that can be used between clients to allow people to pass around the same transaction to sign and combine their signatures. The format is also designed to be easily extended for future use which is harder to do with existing transaction formats. Signing transactions also requires users to have access to the UTXOs being spent. This transaction format will allow offline signers such as air-gapped wallets and hardware wallets to be able to sign transactions without needing direct access to the UTXO set and without risk of being defrauded. The full text can be found here: https://github.com/achow101/bips/blob/bip-psbt/bip-psbt.mediawiki Andrew Chow -------------- next part -------------- An HTML attachment was scrubbed... URL: From kanzure at gmail.com Mon Aug 21 00:00:19 2017 From: kanzure at gmail.com (Bryan Bishop) Date: Sun, 20 Aug 2017 19:00:19 -0500 Subject: [bitcoin-dev] [BIP Proposal] Partially Signed Bitcoin Transaction (PSBT) format In-Reply-To: References: Message-ID: On Fri, Aug 18, 2017 at 5:11 PM, Andrew Chow via bitcoin-dev < bitcoin-dev at lists.linuxfoundation.org> wrote: > > I would like to propose a standard format for unsigned and partially > signed transactions. > Just a quick note but perhaps you and other readers would find this thread (on hardware wallet BIP drafting) to be tangentially related and useful: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-August/013008.html - Bryan http://heybryan.org/ 1 512 203 0507 -------------- next part -------------- An HTML attachment was scrubbed... 
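To make the intended workflow concrete, here is a deliberately simplified toy model of the "pass around and combine" step in Python. This is not the proposed binary format (see the linked BIP text for that); the `PartialTx` structure, its field names, and the keys are all invented for illustration. It only shows the idea that each signer attaches partial signatures for the inputs it can sign, and a combiner merges the copies:

```python
from dataclasses import dataclass, field

@dataclass
class PartialTx:
    """Toy stand-in for a partially signed transaction: the unsigned
    transaction plus, for each input, a map of pubkey -> signature."""
    unsigned_tx: str                                   # placeholder for a serialized tx
    partial_sigs: list = field(default_factory=list)   # one dict per input

def combine(a: PartialTx, b: PartialTx) -> PartialTx:
    """Merge two partially signed copies of the same transaction."""
    if a.unsigned_tx != b.unsigned_tx:
        raise ValueError("not the same underlying transaction")
    merged = [{**sigs_a, **sigs_b}
              for sigs_a, sigs_b in zip(a.partial_sigs, b.partial_sigs)]
    return PartialTx(a.unsigned_tx, merged)

# Two signers each sign one input of the same 2-input transaction,
# possibly on separate offline devices, then someone combines the copies.
alice = PartialTx("rawtx", [{"pkA": "sigA"}, {}])
bob   = PartialTx("rawtx", [{}, {"pkB": "sigB"}])
done  = combine(alice, bob)
```

The point of standardizing the serialization is exactly so that `alice` and `bob` can be produced by different wallet implementations and still be combined.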
URL: From dermoth at aei.ca Mon Aug 21 13:35:22 2017 From: dermoth at aei.ca (Thomas Guyot-Sionnest) Date: Mon, 21 Aug 2017 09:35:22 -0400 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: References: Message-ID: <6e774a20-38f6-3932-4050-789c34f0c2b2@aei.ca> On 21/07/17 03:59 PM, Lucas Clemente Vella via bitcoin-dev wrote: > 2017-07-21 16:28 GMT-03:00 Major Kusanagi via bitcoin-dev > >: > > [...] But the fact is that if we want to make bitcoins last forever, > we have the accept unbounded UTXO growth, which is unscalable. So > the only solution is to limit UTXO growth, meaning bitcoins cannot > last forever. This proposed solution however does not prevent > Bitcoin from lasting forever. > > > Unless there is a logical contradiction in this phrasing, the proposed > solution does not improves scalability: > - "Bitcoins lasting forever" implies "unscalable"; > - "not prevent Bitcoin from lasting forever" implies "Bitcoins lasting > forever"; > - Thus: "not prevent Bitcoin from lasting forever" implies "unscalable". > > In practice, the only Bitcoin lost would be those whose owners forgot > about or has lost the keys, because everyone with a significant amount > of Bitcoins would always shift them around before it loses any luster (I > wouldn't bother to move my Bitcoins every 10 years). I don't know how to > estimate the percentage of UTXO is actually lost/forgotten, but I have > the opinion it isn't worth the hassle. > > As a side note, your estimate talks about block size, which is > determines blockchain size, which can be "safely" pruned (if you are not > considering new nodes might want to join the network, in case the full > history is needed to be stored somewhere). But UTXO size, albeit related > to the full blockchain size, is the part that currently can not be > safely pruned, so I don't see the relevance of the analysis. 
I think if we wanted to burn lost/stale coins, a better approach would be returning them to miners as a fee - there will always be lost coins, and miners would get that additional revenue stream as the mining reward halves. I also don't think we need to worry about a gradual value loss either; we should just put a limit on UTXO age in block count (actually I would round it up to 210k blocks, as explained below...). So let's say, for example, we decide to keep five 210k-block "generations" (that's over 15 years); then on the first block of the 6th generation, all UTXOs from the 1st generation are invalidated and returned into a "pool". Given these (values in satoshis): Pool "P" (invalidated UTXOs minus total value reclaimed since the last halving) Leftover blocks "B" (210,000 minus blocks mined since the last halving) Then every mined block can reclaim FLOOR(P/B) satoshis in addition to the miner's reward and tx fees. If the last block of a generation does not get the remainder of the pool (FLOOR(P/1) == P), it should get carried over. 
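The FLOOR(P/B) rule above can be simulated directly. The sketch below is a toy simulation under the simple, uncapped variant (function name and structure are mine, not from the proposal): each block reclaims FLOOR(P/B) as P and B shrink, and the last block of a generation drains whatever remains, so nothing is carried over unless a cap is added.

```python
def reclaim_schedule(pool: int, blocks_left: int):
    """Simulate the per-block reclaim rule: each block takes FLOOR(P/B)
    satoshis, where P is the remaining pool and B the remaining blocks.
    Returns the per-block payouts and the leftover pool (always 0 in this
    uncapped variant, since the last block takes FLOOR(P/1) == P)."""
    payouts = []
    for b in range(blocks_left, 0, -1):
        take = pool // b        # FLOOR(P/B)
        payouts.append(take)
        pool -= take
    return payouts, pool

payouts, leftover = reclaim_schedule(pool=10, blocks_left=3)
# payouts == [3, 3, 4], leftover == 0: the whole pool is redistributed
```

One property worth noting: because P is reduced as blocks claim their share, the schedule self-corrects and the total reclaimed over a generation always equals the initial pool, whatever its size.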
Regards, -- Thomas From ethan.scruples at gmail.com Mon Aug 21 14:26:35 2017 From: ethan.scruples at gmail.com (Moral Agent) Date: Mon, 21 Aug 2017 10:26:35 -0400 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: <6e774a20-38f6-3932-4050-789c34f0c2b2@aei.ca> References: <6e774a20-38f6-3932-4050-789c34f0c2b2@aei.ca> Message-ID: A more forgiving option would be to have coins past a certain age evaporate into mining rewards at some rate, rather than all at once. People might find this approach easier to stomach, as it avoids the "I waited 1 block too many and all of my coins vanished" scenario. Another approach would be to demand that a certain minimum mining fee be included, calculated based on the age of an input, like this idea: https://www.reddit.com/r/Bitcoin/comments/35ilir/prioritizing_utxos_using_a_minimum_mining_fee/ This would result in the coins continuing to exist but not being economically spendable, and therefore the UTXO information could be archived. On Mon, Aug 21, 2017 at 9:35 AM, Thomas Guyot-Sionnest via bitcoin-dev < bitcoin-dev at lists.linuxfoundation.org> wrote: > On 21/07/17 03:59 PM, Lucas Clemente Vella via bitcoin-dev wrote: > 2017-07-21 16:28 GMT-03:00 Major Kusanagi via bitcoin-dev > > > >: > > > > [...] But the fact is that if we want to make bitcoins last forever, > > we have the accept unbounded UTXO growth, which is unscalable. So > > the only solution is to limit UTXO growth, meaning bitcoins cannot > > last forever. This proposed solution however does not prevent > > Bitcoin from lasting forever. > > > > > > Unless there is a logical contradiction in this phrasing, the proposed > > solution does not improves scalability: > > - "Bitcoins lasting forever" implies "unscalable"; > > - "not prevent Bitcoin from lasting forever" implies "Bitcoins lasting > > forever"; > > - Thus: "not prevent Bitcoin from lasting forever" implies "unscalable". 
> > > > In practice, the only Bitcoin lost would be those whose owners forgot > > about or has lost the keys, because everyone with a significant amount > > of Bitcoins would always shift them around before it loses any luster (I > > wouldn't bother to move my Bitcoins every 10 years). I don't know how to > > estimate the percentage of UTXO is actually lost/forgotten, but I have > > the opinion it isn't worth the hassle. > > > > As a side note, your estimate talks about block size, which is > > determines blockchain size, which can be "safely" pruned (if you are not > > considering new nodes might want to join the network, in case the full > > history is needed to be stored somewhere). But UTXO size, albeit related > > to the full blockchain size, is the part that currently can not be > > safely pruned, so I don't see the relevance of the analysis. > > I think if we wanted to burn lost/stale coins a better approach would be > returning them to miner's as a fee - there will always be lost coins and > miners will be able to get that additional revenue stream as the mining > reward halves. I also don't think we need to worry about doing a gradual > value loss neither, we should just put a limit on UTXO age in block > count (actually I would round it up to 210k blocks as explained below...). > > > So lets say for example we decide to keep 5 210k blocks "generations" > (that's over 15 years), then on the first block of the 6th generation > all UTXO's from the 1st generation are invalidated and returned into a > "pool". > > Given these (values in satoshis): > > Pool "P" (invalided UTXO minus total value reclaimed since last halving) > Leftover blocks "B" (210,000 minus blocks mined since last halving) > > Then every mined block can reclaim FLOOR(P/B) satoshi in addition to > miner's reward and tx fees. > > If the last block of a generation does not get the remainder of the pool > (FLOOR(P/1) == P) it should get carried over. 
> > > This would ensure we can clear old blocks after a few generations and > that burnt/lost coins eventually get back in circulation. Also it would > reduce the reliance of miners on actual TX fees. > > > To avoid excessive miner reward initially, for the first few iterations > the value of B could be increased (I haven't calculated the UTXO size of > the first 210k blocks but it could be excessively high...) or the value > each block can reclaim could be caped (so we would reclaim at an > artificial capacity until the pool depletes...). > > > Regards, > > -- > Thomas > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik at q32.com Mon Aug 21 17:24:09 2017 From: erik at q32.com (Erik Aronesty) Date: Mon, 21 Aug 2017 13:24:09 -0400 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: References: <6e774a20-38f6-3932-4050-789c34f0c2b2@aei.ca> Message-ID: 1. If it only affects "old dust" UTXO's where the # of coins in the UTXO aren't sufficient to pay some lower quantile of transaction fees, then there can be little argument of theft or loss. 2. There's another use-case for demurrage as well. Computation power may grow rapidly if quantum computing becomes more common. At some point, Bitcoin may have to change the public key format for coins and the POW used. In order to do this, old coins will have to transact on the network, moving their value to a new format, with many more bits in the public key, for example. But since quantum computing isn't bounded by moore's law, so this may need to be a regular upgrade every X years. 
Rather than a regular "bit widening hard fork", the number of bits needed in a public address format could be scaled to the difficulty of the new quantum hashing algorithm, which must *also* now grow in the number of bits over time. To ensure that coins are secure, those with too few bits must drop off the network. So the timing for old-coin demurrage can effectively be based on the quantum POW difficulty adjustments. As long as the subsequent exponential rate of computation increase can be reasonably predicted (a quantum version of Moore's law), the new rate of decay can be pegged to a number of years. On Mon, Aug 21, 2017 at 10:26 AM, Moral Agent via bitcoin-dev < bitcoin-dev at lists.linuxfoundation.org> wrote: > A more forgiving option would be to have coins past a certain age > evaporate into mining rewards at some rate, rather than all at once. People > might find this approach easier to stomach as it avoids the "I waited 1 > block too many and all of my coins vanished" scenario. > > Another approach would be to demand that a certain minimum mining fee be > included that is calculated based on the age of an input, like this idea: > https://www.reddit.com/r/Bitcoin/comments/35ilir/ > prioritizing_utxos_using_a_minimum_mining_fee/ > > This would result in the coins continuing to exist but not being > economically spendable, and therefore the UTXO information could be > archived. > > On Mon, Aug 21, 2017 at 9:35 AM, Thomas Guyot-Sionnest via bitcoin-dev < > bitcoin-dev at lists.linuxfoundation.org> wrote: >> On 21/07/17 03:59 PM, Lucas Clemente Vella via bitcoin-dev wrote: >> > 2017-07-21 16:28 GMT-03:00 Major Kusanagi via bitcoin-dev >> > > > >: >> > >> > [...] But the fact is that if we want to make bitcoins last forever, >> > we have to accept unbounded UTXO growth, which is unscalable. So >> > the only solution is to limit UTXO growth, meaning bitcoins cannot >> > last forever. This proposed solution however does not prevent >> > Bitcoin from lasting forever.
>> > >> > >> > Unless there is a logical contradiction in this phrasing, the proposed >> > solution does not improves scalability: >> > - "Bitcoins lasting forever" implies "unscalable"; >> > - "not prevent Bitcoin from lasting forever" implies "Bitcoins lasting >> > forever"; >> > - Thus: "not prevent Bitcoin from lasting forever" implies >> "unscalable". >> > >> > In practice, the only Bitcoin lost would be those whose owners forgot >> > about or has lost the keys, because everyone with a significant amount >> > of Bitcoins would always shift them around before it loses any luster (I >> > wouldn't bother to move my Bitcoins every 10 years). I don't know how to >> > estimate the percentage of UTXO is actually lost/forgotten, but I have >> > the opinion it isn't worth the hassle. >> > >> > As a side note, your estimate talks about block size, which is >> > determines blockchain size, which can be "safely" pruned (if you are not >> > considering new nodes might want to join the network, in case the full >> > history is needed to be stored somewhere). But UTXO size, albeit related >> > to the full blockchain size, is the part that currently can not be >> > safely pruned, so I don't see the relevance of the analysis. >> >> I think if we wanted to burn lost/stale coins a better approach would be >> returning them to miner's as a fee - there will always be lost coins and >> miners will be able to get that additional revenue stream as the mining >> reward halves. I also don't think we need to worry about doing a gradual >> value loss neither, we should just put a limit on UTXO age in block >> count (actually I would round it up to 210k blocks as explained below...). >> >> >> So lets say for example we decide to keep 5 210k blocks "generations" >> (that's over 15 years), then on the first block of the 6th generation >> all UTXO's from the 1st generation are invalidated and returned into a >> "pool". 
>> >> Given these (values in satoshis): >> >> Pool "P" (invalided UTXO minus total value reclaimed since last halving) >> Leftover blocks "B" (210,000 minus blocks mined since last halving) >> >> Then every mined block can reclaim FLOOR(P/B) satoshi in addition to >> miner's reward and tx fees. >> >> If the last block of a generation does not get the remainder of the pool >> (FLOOR(P/1) == P) it should get carried over. >> >> >> This would ensure we can clear old blocks after a few generations and >> that burnt/lost coins eventually get back in circulation. Also it would >> reduce the reliance of miners on actual TX fees. >> >> >> To avoid excessive miner reward initially, for the first few iterations >> the value of B could be increased (I haven't calculated the UTXO size of >> the first 210k blocks but it could be excessively high...) or the value >> each block can reclaim could be caped (so we would reclaim at an >> artificial capacity until the pool depletes...). >> >> >> Regards, >> >> -- >> Thomas >> >> _______________________________________________ >> bitcoin-dev mailing list >> bitcoin-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev >> > > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gsanders87 at gmail.com Mon Aug 21 18:12:47 2017 From: gsanders87 at gmail.com (Greg Sanders) Date: Mon, 21 Aug 2017 11:12:47 -0700 Subject: [bitcoin-dev] [BIP Proposal] Partially Signed Bitcoin Transaction (PSBT) format In-Reply-To: References: Message-ID: Some related thoughts and a suggestion for an extension that kanzure suggested I post here:

Hardware wallet attacks by input ownership omission, and a fix
----------------------------------------------------------------------------------

So a while back I realized that for HW wallets to do safe automated coinjoins (without any user interaction, while being sure there are no fee dumps or money handed to others) you have to protect yourself from the case of signing one set of inputs while the other owned set is hidden from the device, then repeating the same action with the two sets reversed. Note that there is no support for such a mode in HW wallets today, but it could greatly increase the liquidity of JoinMarket-like systems.

First signing pass:

1 BTC (yours, host tells ledger about it) --------
                                                  > 1.5 BTC
1 BTC (yours, host fails to tell ledger about it)-

Second signing pass:

1 BTC (yours, host fails to tell ledger) ---------
                                                  > 1.5 BTC
1 BTC (yours, host tells ledger about it)---------

In this scenario, you sign the first input, thinking "great, I'm getting 0.5 BTC for running coinjoin", when in reality this will simply be replayed later with the inputs switched, *costing* you 0.5 BTC. (Ledger doesn't support "negative fees", but imagine more inputs are included that aren't yours.) More recently I noticed a more common issue along the same lines: with segwit inputs, the entire transaction referred to by the prevout is generally no longer passed to the HW wallet signing API. This greatly speeds up signing, since potentially multiple MBs of transactions are no longer passed into the device, but it comes with a cost: an attacker can claim certain inputs' values are much lower than they actually are.
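To make the cost concrete, the two-pass value misreporting can be illustrated numerically (a toy sketch; the satoshi values and the function name are hypothetical, chosen only to mirror the scenario described):

```python
def device_perceived_fee(reported_input_values, output_values):
    """Fee as the signing device computes it from host-reported input values."""
    return sum(reported_input_values) - sum(output_values)

outputs = [150_000_000]                   # one 1.5 BTC output, in satoshis
true_inputs = [100_000_000, 100_000_000]  # two 1 BTC inputs owned by the user

# Pass 1: host reports input 1 correctly (harvesting its valid signature)
# and underreports input 2; the device displays a small, plausible fee.
pass1 = device_perceived_fee([100_000_000, 50_010_000], outputs)

# Pass 2: the misreporting is swapped; input 2's valid signature is
# harvested while the device again displays the same small fee.
pass2 = device_perceived_fee([50_010_000, 100_000_000], outputs)

# Combining the two correctly-signed inputs spends the real transaction:
actual_fee = sum(true_inputs) - sum(outputs)
print(pass1, pass2, actual_fee)  # 10000 10000 50000000
```

The device showed a 10,000-satoshi fee on each pass, yet the broadcastable transaction assembled from the two harvested signatures pays 0.5 BTC in fees.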
In the first pass, the host reports the first input's value properly and the second as lower. The signature on the first input will go through fine (the value included in the sighash is only for that input); then the attacker prompts a restart of signing, reporting the 2nd value properly and the first value improperly low, which allows the attacker to report the same fee twice on the device. Both signatures over each input are correct, but the user was prompted with an invalid fee amount (too low). To fix this I consulted with andytoshi and got something we think works for both cases:

1) When a signing device receives a partially signed transaction, all inputs must come with an ownership proof:
- For the input at address A, a signature over H(A || x) using the key for A. 'x' is some private fixed key that only the signing device knows (most likely some privkey along some unique bip32 path).
- For each input ownership proof, the HW wallet validates each signature over the hashed message, then attempts to "decode" the hash by applying its own 'x'. If the hash doesn't match, it cannot be its own input.
- Sign for every input that is yours.

This at a minimum makes sure that the wallet's total "balance" will not go down by more than the reported fee.

Benefits:
- Still a small memory footprint compared to legacy signing
- Allows user-interactionless coinjoins without putting funds at risk
- These proofs can be created at any time, collected at the front of any CoinJoin-like protocol.
- These proofs can be passed around as additional fields for Partially Signed Bitcoin Transactions: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-August/014838.html

On Sun, Aug 20, 2017 at 5:00 PM, Bryan Bishop via bitcoin-dev < bitcoin-dev at lists.linuxfoundation.org> wrote: > On Fri, Aug 18, 2017 at 5:11 PM, Andrew Chow via bitcoin-dev < > bitcoin-dev at lists.linuxfoundation.org> wrote: >> >> I would like to propose a standard format for unsigned and partially >> signed transactions.
>> > Just a quick note, but perhaps you and other readers would find this thread > (on hardware wallet BIP drafting) to be tangentially related and useful: > https://lists.linuxfoundation.org/pipermail/bitcoin-dev/ > 2016-August/013008.html > > - Bryan > http://heybryan.org/ > 1 512 203 0507 > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hoenicke at gmail.com Mon Aug 21 21:36:24 2017 From: hoenicke at gmail.com (Jochen Hoenicke) Date: Mon, 21 Aug 2017 23:36:24 +0200 Subject: [bitcoin-dev] [BIP Proposal] Partially Signed Bitcoin Transaction (PSBT) format In-Reply-To: References: Message-ID: <5f67d70d-a432-7826-22df-4207580aa1d2@gmail.com> On 21.08.2017 20:12, Greg Sanders via bitcoin-dev wrote: > To fix this I consulted with andytoshi and got something we think works > for both cases: > > 1) When a signing device receives a partially signed transaction, all > inputs must come with an ownership proof: > - For the input at address A, a signature over H(A || x) using the key > for A. 'x' is some private fixed key that only the signing device > knows (most likely some privkey along some unique bip32 path). > - For each input ownership proof, the HW wallet validates each signature > over the hashed message, then attempts to "decode" the hash by applying > its own 'x'. If the hash doesn't match, it cannot be its own input. > - Sign for every input that is yours Interesting - basically a proof of non-ownership :), a proof that the hardware wallet doesn't own the address. But shouldn't x be public, so that the device can verify the signature? Can you expand on this: what exactly is signed with which key, and how is it checked?
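For concreteness, one plausible reading of the quoted scheme can be sketched as follows (purely illustrative and not a specified construction - the hash is assumed to be SHA-256, the per-input signature step is mocked out, and all names are hypothetical):

```python
import hashlib
import hmac

def make_ownership_proof(address: bytes, x: bytes, sign):
    """Prover side, per the quoted scheme: compute H(A || x) with the
    device's private fixed key x, then sign that digest with the key for
    address A. The signing step is mocked via the 'sign' callback."""
    digest = hashlib.sha256(address + x).digest()
    return digest, sign(digest)

def is_my_input(address: bytes, claimed_digest: bytes, my_x: bytes) -> bool:
    """Verifier side: recompute H(A || x) with our own x. A match means
    the input is ours; a mismatch means it cannot be our own input.
    (Verifying the signature over the digest is elided here - that is
    exactly the part the question above is probing.)"""
    recomputed = hashlib.sha256(address + my_x).digest()
    return hmac.compare_digest(recomputed, claimed_digest)

# Toy run: device_x belongs to our device, other_x to someone else's.
device_x, other_x = b"\x01" * 32, b"\x02" * 32
addr = b"bc1qexampleaddress"
digest, _sig = make_ownership_proof(addr, device_x, sign=lambda d: b"<sig>")
print(is_my_input(addr, digest, device_x))  # True  - our own input
print(is_my_input(addr, digest, other_x))   # False - not our input
```

Under this reading, a mismatch only tells a device the input is not its own; how the device checks the signature itself without knowing the prover's x remains the open question.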
One also has to make sure that it's not possible to reuse signatures that were made for a different purpose as ownership proofs. Jochen From matthew.beton at gmail.com Tue Aug 22 08:19:26 2017 From: matthew.beton at gmail.com (Matthew Beton) Date: Tue, 22 Aug 2017 08:19:26 +0000 Subject: [bitcoin-dev] UTXO growth scaling solution proposal Message-ID: Okay, so I quite like this idea. If we start removing at height 630000 or 840000 (which gives us 4-8 years to develop this solution), it stays nice and neat with the halving interval. We can look at this like so:

B - the current block number
P - how many blocks behind current the coin-burning block is (630000, 840000, or otherwise)

Every time we mine a new block, we go to block (B-P) and check for stale coins. These coins get burnt up and pooled into block B's miner fees. This keeps the mining rewards up in the long term, so people are less likely to stop mining due to fees that are too low. It also encourages people to keep moving their money around the economy instead of just hoarding and leaving it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From criley at gmail.com Tue Aug 22 13:45:12 2017 From: criley at gmail.com (Chris Riley) Date: Tue, 22 Aug 2017 09:45:12 -0400 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: References: Message-ID: This seems to be drifting off into alt-coin discussion. The idea that we can change the rules and steal coins at a later date because they are "stale" or someone is "hoarding" is antithetical to one of the points of bitcoin ("be your own bank") in that you can no longer control your own money: someone can at a later date take your coins for some reason that is outside your control and based solely on some rationalization by a third party. Once the rule is established that there are valid reasons why someone should not have control of their own bitcoins, what other reasons will then be determined to be valid?
I can imagine Hal Finney being revived (he was cryo-preserved at Alcor if you aren't aware) after 100 or 200 years expecting his coins to be there, only to find out that his coins were deemed "stale" and so were "reclaimed" (in the current doublespeak - e.g. stolen or confiscated). Or perhaps he locked some for his children and they are found to be "stale" before they are available. He said in March 2013, "I think they're safe enough" stored in a paper wallet. Perhaps any remaining coins are no longer "safe enough." Again, this seems (a) more about an alt-coin/bitcoin fork or (b) better in bitcoin-discuss at best vs bitcoin-dev. I've seen it discussed many times since 2010 and still do not agree with the rationale that allowing someone to steal someone else's coins for any reason is a useful change to bitcoin. On Tue, Aug 22, 2017 at 4:19 AM, Matthew Beton via bitcoin-dev < bitcoin-dev at lists.linuxfoundation.org> wrote: > Okay so I quite like this idea. If we start removing at height 630000 or > 840000 (gives us 4-8 years to develop this solution), it stays nice and > neat with the halving interval. We can look at this like so: > > B - the current block number > P - how many blocks behind current the coin burning block is. (630000, > 840000, or otherwise.) > > Every time we mine a new block, we go to block (B-P), and check for stale > coins. These coins get burnt up and pooled into block B's miner fees. This > keeps the mining rewards up in the long term, people are less likely to > stop mining due to too low fees. It also encourages people to keep moving > their money around the economy instead of just hoarding and leaving it. > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From matthew.beton at gmail.com Tue Aug 22 14:04:49 2017 From: matthew.beton at gmail.com (Matthew Beton) Date: Tue, 22 Aug 2017 14:04:49 +0000 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: References: Message-ID: Ok, I see your point. I was just thinking about the number of bitcoins tied up in wallets in which people lost the keys, but I suppose this isn't so much of a problem if it's well known that the bitcoins are all tied up. It would be impossible to distinguish between bitcoins people have lost access to, and bitcoins that people have just left in the same wallet for a long time. On Tue, 22 Aug 2017, 3:45 pm Chris Riley wrote: > This seems to be drifting off into alt-coin discussion. The idea that we > can change the rules and steal coins at a later date because they are > "stale" or someone is "hoarding" is antithetical to one of the points of > bitcoin in that you can no longer control your own money ("be your own > bank") because someone can at a later date take your coins for some reason > that is outside your control and solely based on some rationalization by a > third party. Once the rule is established that there are valid reasons why > someone should not have control of their own bitcoins, what other reasons > will then be determined to be valid? > > I can imagine Hal Finney being revived (he was cryo-preserved at Alcor if > you aren't aware) after 100 or 200 years expecting his coins to be there > only to find out that his coins were deemed "stale" so were "reclaimed" (in > the current doublespeak - e.g. stolen or confiscated). Or perhaps he > locked some for his children and they are found to be "stale" before they > are available. He said in March 2013, "I think they're safe enough" stored > in a paper wallet. Perhaps any remaining coins are no longer "safe enough." > > Again, this seems (a) more about an alt-coin/bitcoin fork or (b) better in > bitcoin-discuss at best vs bitcoin-dev. 
I've seen it discussed many times > since 2010 and still do not agree with the rational that embracing allowing > someone to steal someone else's coins for any reason is a useful change to > bitcoin. > > > > > On Tue, Aug 22, 2017 at 4:19 AM, Matthew Beton via bitcoin-dev < > bitcoin-dev at lists.linuxfoundation.org> wrote: > >> Okay so I quite like this idea. If we start removing at height 630000 or >> 840000 (gives us 4-8 years to develop this solution), it stays nice and >> neat with the halving interval. We can look at this like so: >> >> B - the current block number >> P - how many blocks behind current the coin burning block is. (630000, >> 840000, or otherwise.) >> >> Every time we mine a new block, we go to block (B-P), and check for stale >> coins. These coins get burnt up and pooled into block B's miner fees. This >> keeps the mining rewards up in the long term, people are less likely to >> stop mining due to too low fees. It also encourages people to keep moving >> their money around the enconomy instead of just hording and leaving it. >> > _______________________________________________ >> bitcoin-dev mailing list >> bitcoin-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik at q32.com Tue Aug 22 14:29:26 2017 From: erik at q32.com (Erik Aronesty) Date: Tue, 22 Aug 2017 10:29:26 -0400 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: References: Message-ID: I agree, it is only a good idea in the event of a quantum computing threat to the security of Bitcoin. On Tue, Aug 22, 2017 at 9:45 AM, Chris Riley via bitcoin-dev < bitcoin-dev at lists.linuxfoundation.org> wrote: > This seems to be drifting off into alt-coin discussion. 
The idea that we > can change the rules and steal coins at a later date because they are > "stale" or someone is "hoarding" is antithetical to one of the points of > bitcoin in that you can no longer control your own money ("be your own > bank") because someone can at a later date take your coins for some reason > that is outside your control and solely based on some rationalization by a > third party. Once the rule is established that there are valid reasons why > someone should not have control of their own bitcoins, what other reasons > will then be determined to be valid? > > I can imagine Hal Finney being revived (he was cryo-preserved at Alcor if > you aren't aware) after 100 or 200 years expecting his coins to be there > only to find out that his coins were deemed "stale" so were "reclaimed" (in > the current doublespeak - e.g. stolen or confiscated). Or perhaps he > locked some for his children and they are found to be "stale" before they > are available. He said in March 2013, "I think they're safe enough" stored > in a paper wallet. Perhaps any remaining coins are no longer "safe enough." > > Again, this seems (a) more about an alt-coin/bitcoin fork or (b) better in > bitcoin-discuss at best vs bitcoin-dev. I've seen it discussed many times > since 2010 and still do not agree with the rational that embracing allowing > someone to steal someone else's coins for any reason is a useful change to > bitcoin. > > > > > On Tue, Aug 22, 2017 at 4:19 AM, Matthew Beton via bitcoin-dev < > bitcoin-dev at lists.linuxfoundation.org> wrote: > >> Okay so I quite like this idea. If we start removing at height 630000 or >> 840000 (gives us 4-8 years to develop this solution), it stays nice and >> neat with the halving interval. We can look at this like so: >> >> B - the current block number >> P - how many blocks behind current the coin burning block is. (630000, >> 840000, or otherwise.) 
>> >> Every time we mine a new block, we go to block (B-P), and check for stale >> coins. These coins get burnt up and pooled into block B's miner fees. This >> keeps the mining rewards up in the long term, people are less likely to >> stop mining due to too low fees. It also encourages people to keep moving >> their money around the economy instead of just hoarding and leaving it. >> _______________________________________________ >> bitcoin-dev mailing list >> bitcoin-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev >> >> > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dermoth at aei.ca Tue Aug 22 17:24:05 2017 From: dermoth at aei.ca (Thomas Guyot-Sionnest) Date: Tue, 22 Aug 2017 13:24:05 -0400 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: References: Message-ID: <4c39bee6-f419-2e36-62a8-d38171b15558@aei.ca> In any case, when Hal Finney does not wake up from his 200-year cryo-preservation (because unfortunately for him, 200 years earlier they did not know how to preserve a body well enough to resurrect it), he will find that advances in computer technology made it trivial for anyone to steal his coins using the long-obsolete secp256k1 EC curve (which was done long before, as soon as it became profitable to crack the huge stash of stale coins in the early blocks). I just don't get the argument that you can't be "your own bank". The only requirement coming from this would be to move your coins about once every 10 years or so, which you should be able to do if you have your private keys (you should!).
You say it may be something to consider when computer breakthroughs make old outputs vulnerable, but I say it's not "if" but "when" it happens, and by telling people firsthand that their coins require moving every once in a long while you ensure they won't do stupid things or come back 50 years from now and complain their addresses have been scavenged. -- Thomas On 22/08/17 10:29 AM, Erik Aronesty via bitcoin-dev wrote: > I agree, it is only a good idea in the event of a quantum computing > threat to the security of Bitcoin. > > On Tue, Aug 22, 2017 at 9:45 AM, Chris Riley via bitcoin-dev > > wrote: > > This seems to be drifting off into alt-coin discussion. The idea > that we can change the rules and steal coins at a later date > because they are "stale" or someone is "hoarding" is antithetical > to one of the points of bitcoin in that you can no longer control > your own money ("be your own bank") because someone can at a later > date take your coins for some reason that is outside your control > and solely based on some rationalization by a third party. Once > the rule is established that there are valid reasons why someone > should not have control of their own bitcoins, what other reasons > will then be determined to be valid? > > I can imagine Hal Finney being revived (he was cryo-preserved at > Alcor if you aren't aware) after 100 or 200 years expecting his > coins to be there only to find out that his coins were deemed > "stale" so were "reclaimed" (in the current doublespeak - e.g. > stolen or confiscated). Or perhaps he locked some for his > children and they are found to be "stale" before they are > available. He said in March 2013, "I think they're safe enough" > stored in a paper wallet. Perhaps any remaining coins are no > longer "safe enough." > > Again, this seems (a) more about an alt-coin/bitcoin fork or (b) > better in bitcoin-discuss at best vs bitcoin-dev.
> I've seen it > discussed many times since 2010 and still do not agree with the > rationale that allowing > someone to steal someone else's > coins for any reason is a useful change to > bitcoin. > > > > > On Tue, Aug 22, 2017 at 4:19 AM, Matthew Beton via bitcoin-dev > > wrote: > > Okay so I quite like this idea. If we start removing at height > 630000 or 840000 (gives us 4-8 years to develop this > solution), it stays nice and neat with the halving interval. > We can look at this like so: > > B - the current block number > P - how many blocks behind current the coin burning block is. > (630000, 840000, or otherwise.) > > Every time we mine a new block, we go to block (B-P), and > check for stale coins. These coins get burnt up and pooled > into block B's miner fees. This keeps the mining rewards up in > the long term, people are less likely to stop mining due to > too low fees. It also encourages people to keep moving their > money around the economy instead of just hoarding and leaving it. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.beton at gmail.com Tue Aug 22 17:33:41 2017 From: matthew.beton at gmail.com (Matthew Beton) Date: Tue, 22 Aug 2017 17:33:41 +0000 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: <4c39bee6-f419-2e36-62a8-d38171b15558@aei.ca> References: <4c39bee6-f419-2e36-62a8-d38171b15558@aei.ca> Message-ID: Very true: if Moore's law is still functional in 200 years, computers will be 2^100 times faster (possibly more if quantum computing becomes commonplace), and so old wallets may be easily cracked. We will need a way to force people to use newer, higher-security wallets, and turning coins into mining rewards is a better solution than them just being hacked.
On Tue, 22 Aug 2017, 7:24 pm Thomas Guyot-Sionnest wrote: > In any case when Hal Finney do not wake up from his 200years > cryo-preservation (because unfortunately for him 200 years earlier they did > not know how to preserve a body well enough to resurrect it) he would find > that advance in computer technology made it trivial for anyone to steal his > coins using the long-obsolete secp256k1 ec curve (which was done long > before, as soon as it became profitable to crack down the huge stash of > coins stale in the early blocks) > > I just don't get that argument that you can't be "your own bank". The only > requirement coming from this would be to move your coins about once every > 10 years or so, which you should be able to do if you have your private > keys (you should!). You say it may be something to consider when computer > breakthroughs makes old outputs vulnerable, but I say it's not "if" but > "when" it happens, and by telling firsthand people that their coins > requires moving every once in a long while you ensure they won't do stupid > things or come back 50 years from now and complain their addresses have > been scavenged. > > -- > Thomas > > > On 22/08/17 10:29 AM, Erik Aronesty via bitcoin-dev wrote: > > I agree, it is only a good idea in the event of a quantum computing threat > to the security of Bitcoin. > > On Tue, Aug 22, 2017 at 9:45 AM, Chris Riley via bitcoin-dev < > bitcoin-dev at lists.linuxfoundation.org> wrote: > >> This seems to be drifting off into alt-coin discussion. The idea that we >> can change the rules and steal coins at a later date because they are >> "stale" or someone is "hoarding" is antithetical to one of the points of >> bitcoin in that you can no longer control your own money ("be your own >> bank") because someone can at a later date take your coins for some reason >> that is outside your control and solely based on some rationalization by a >> third party. 
Once the rule is established that there are valid reasons why >> someone should not have control of their own bitcoins, what other reasons >> will then be determined to be valid? >> >> I can imagine Hal Finney being revived (he was cryo-preserved at Alcor if >> you aren't aware) after 100 or 200 years expecting his coins to be there >> only to find out that his coins were deemed "stale" so were "reclaimed" (in >> the current doublespeak - e.g. stolen or confiscated). Or perhaps he >> locked some for his children and they are found to be "stale" before they >> are available. He said in March 2013, "I think they're safe enough" stored >> in a paper wallet. Perhaps any remaining coins are no longer "safe enough." >> >> Again, this seems (a) more about an alt-coin/bitcoin fork or (b) better >> in bitcoin-discuss at best vs bitcoin-dev. I've seen it discussed many >> times since 2010 and still do not agree with the rational that embracing >> allowing someone to steal someone else's coins for any reason is a useful >> change to bitcoin. >> >> >> >> >> On Tue, Aug 22, 2017 at 4:19 AM, Matthew Beton via bitcoin-dev < >> bitcoin-dev at lists.linuxfoundation.org> wrote: >> >>> Okay so I quite like this idea. If we start removing at height 630000 or >>> 840000 (gives us 4-8 years to develop this solution), it stays nice and >>> neat with the halving interval. We can look at this like so: >>> >>> B - the current block number >>> P - how many blocks behind current the coin burning block is. (630000, >>> 840000, or otherwise.) >>> >>> Every time we mine a new block, we go to block (B-P), and check for >>> stale coins. These coins get burnt up and pooled into block B's miner fees. >>> This keeps the mining rewards up in the long term, people are less likely >>> to stop mining due to too low fees. It also encourages people to keep >>> moving their money around the enconomy instead of just hording and leaving >>> it. 
>>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r8921039 at hotmail.com Tue Aug 22 18:18:13 2017 From: r8921039 at hotmail.com (DANIEL YIN) Date: Tue, 22 Aug 2017 18:18:13 +0000 Subject: [bitcoin-dev] bitcoin-dev Digest, Vol 27, Issue 10 In-Reply-To: References: Message-ID: > Very true, if Moore's law is still functional in 200 years, computers will > be 2^100 times faster (possibly more if quantum computing becomes > commonplace), and so old wallets may be easily cracked. > > We will need a way to force people to use newer, higher-security wallets, > and turning coins into mining rewards is a better solution than them just being > hacked. Even in such an event, my personal view is that the bitcoin owner should have the freedom to choose to upgrade and secure his/her coins, or to leave the door open for the first hacker to claim the coins - yet the bitcoin network that he/she trusts should not act like a hacker and claim his/her coins. daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From criley at gmail.com Tue Aug 22 18:55:15 2017 From: criley at gmail.com (Chris Riley) Date: Tue, 22 Aug 2017 14:55:15 -0400 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: References: <4c39bee6-f419-2e36-62a8-d38171b15558@aei.ca> Message-ID: The initial message I replied to stated in part, "Okay so I quite like this idea. If we start removing at height 630000 or 840000 (gives us 4-8 years to develop this solution), it stays nice and neat with the halving interval...." That is less than 3 years or less than 7 years away - much sooner than it is believed QC or Moore's law could impact bitcoin. Changing bitcoin so as to require that early coins start getting "scavenged" at that date seems unneeded and irresponsible. Besides, your ECDSA public key is only revealed when you spend the coins, which does provide some quantum resistance.
Hal was just an example of people putting their coins away expecting them to be there at X years in the future, whether it is for himself or for his kids and wife. :-) On Tue, Aug 22, 2017 at 1:33 PM, Matthew Beton wrote: > Very true, if Moore's law is still functional in 200 years, computers will > be 2^100 times faster (possibly more if quantum computing becomes > commonplace), and so old wallets may be easily cracked. > > We will need a way to force people to use newer, higher security wallets, > and turning coins to mining rewards is a better solution than them just being > hacked. > > On Tue, 22 Aug 2017, 7:24 pm Thomas Guyot-Sionnest wrote: > >> In any case when Hal Finney does not wake up from his 200-year >> cryo-preservation (because unfortunately for him 200 years earlier they did >> not know how to preserve a body well enough to resurrect it) he would find >> that advances in computer technology made it trivial for anyone to steal his >> coins using the long-obsolete secp256k1 ec curve (which was done long >> before, as soon as it became profitable to crack down the huge stash of >> coins stale in the early blocks) >> >> I just don't get that argument that you can't be "your own bank". The >> only requirement coming from this would be to move your coins about once >> every 10 years or so, which you should be able to do if you have your >> private keys (you should!). You say it may be something to consider when >> computer breakthroughs make old outputs vulnerable, but I say it's not >> "if" but "when" it happens, and by telling firsthand people that their >> coins require moving every once in a long while you ensure they won't do >> stupid things or come back 50 years from now and complain their addresses >> have been scavenged. >> >> -- >> Thomas >> >> >> On 22/08/17 10:29 AM, Erik Aronesty via bitcoin-dev wrote: >> >> I agree, it is only a good idea in the event of a quantum computing >> threat to the security of Bitcoin.
>> >> On Tue, Aug 22, 2017 at 9:45 AM, Chris Riley via bitcoin-dev < >> bitcoin-dev at lists.linuxfoundation.org> wrote: >> >>> This seems to be drifting off into alt-coin discussion. The idea that >>> we can change the rules and steal coins at a later date because they are >>> "stale" or someone is "hoarding" is antithetical to one of the points of >>> bitcoin in that you can no longer control your own money ("be your own >>> bank") because someone can at a later date take your coins for some reason >>> that is outside your control and solely based on some rationalization by a >>> third party. Once the rule is established that there are valid reasons why >>> someone should not have control of their own bitcoins, what other reasons >>> will then be determined to be valid? >>> >>> I can imagine Hal Finney being revived (he was cryo-preserved at Alcor >>> if you aren't aware) after 100 or 200 years expecting his coins to be there >>> only to find out that his coins were deemed "stale" so were "reclaimed" (in >>> the current doublespeak - e.g. stolen or confiscated). Or perhaps he >>> locked some for his children and they are found to be "stale" before they >>> are available. He said in March 2013, "I think they're safe enough" stored >>> in a paper wallet. Perhaps any remaining coins are no longer "safe enough." >>> >>> Again, this seems (a) more about an alt-coin/bitcoin fork or (b) better >>> in bitcoin-discuss at best vs bitcoin-dev. I've seen it discussed many >>> times since 2010 and still do not agree with the rational that embracing >>> allowing someone to steal someone else's coins for any reason is a useful >>> change to bitcoin. >>> >>> >>> >>> >>> On Tue, Aug 22, 2017 at 4:19 AM, Matthew Beton via bitcoin-dev < >>> bitcoin-dev at lists.linuxfoundation.org> wrote: >>> >>>> Okay so I quite like this idea. If we start removing at height 630000 >>>> or 840000 (gives us 4-8 years to develop this solution), it stays nice and >>>> neat with the halving interval. 
We can look at this like so: >>>> >>>> B - the current block number >>>> P - how many blocks behind current the coin burning block is. (630000, >>>> 840000, or otherwise.) >>>> >>>> Every time we mine a new block, we go to block (B-P), and check for >>>> stale coins. These coins get burnt up and pooled into block B's miner fees. >>>> This keeps the mining rewards up in the long term, people are less likely >>>> to stop mining due to too low fees. It also encourages people to keep >>>> moving their money around the economy instead of just hoarding and leaving >>>> it. >>>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsanders87 at gmail.com Tue Aug 22 19:26:30 2017 From: gsanders87 at gmail.com (Greg Sanders) Date: Tue, 22 Aug 2017 12:26:30 -0700 Subject: [bitcoin-dev] [BIP Proposal] Partially Signed Bitcoin Transaction (PSBT) format In-Reply-To: <5f67d70d-a432-7826-22df-4207580aa1d2@gmail.com> References: <5f67d70d-a432-7826-22df-4207580aa1d2@gmail.com> Message-ID: If 'x' is public, that makes it identifiable and privacy-losing across inputs. To avoid "re-use" I suppose you'd want to sign some message like `HMAC("ownership proof", H(A || x))` instead. Otherwise any signature you make using `A` ends up being usable as a proof that you don't know the input (this seems like a minor detail, but to be clear)... To reiterate: Sign `HMAC("ownership proof", H(A || x))` using `A`.
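A minimal stdlib-only Python sketch of this scheme (the ECDSA signature over the resulting message with the key for `A` is elided; the use of SHA-256, the function names, and treating `H(A || x)` as the HMAC key are assumptions for illustration):

```python
import hashlib
import hmac

def ownership_tag(pubkey_a: bytes, x: bytes) -> bytes:
    # H(A || x): looks random to outside observers, but is reproducible
    # by the one device that knows the private fixed key x.
    return hashlib.sha256(pubkey_a + x).digest()

def ownership_proof_message(pubkey_a: bytes, x: bytes) -> bytes:
    # The message that would then be signed with the key for A
    # (the signing step itself is elided here).
    return hmac.new(ownership_tag(pubkey_a, x), b"ownership proof",
                    hashlib.sha256).digest()

def is_own_input(pubkey_a: bytes, claimed_tag: bytes, device_x: bytes) -> bool:
    # The HW wallet applies its own x; a mismatch means "not my input".
    return hmac.compare_digest(claimed_tag, ownership_tag(pubkey_a, device_x))
```

A device holding a different `x` cannot recreate the tag, so it refuses to treat the input as its own.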
Public verifiers see `HMAC("ownership proof", some_random_hash_connected_to_A)` and the HWW that owns that input can recreate `some_random_hash_connected_to_A` by computing `H(A || x)` On Mon, Aug 21, 2017 at 2:36 PM, Jochen Hoenicke wrote: > On 21.08.2017 20:12, Greg Sanders via bitcoin-dev wrote: > > To fix this I consulted with andytoshi and got something we think works > > for both cases: > > > > 1) When a signing device receives a partially signed transaction, all > > inputs must come with an ownership proof: > > - For the input at address A, a signature over H(A || x) using the key > > for A. 'x' is some private fixed key that only the signing device > > knows (most likely some privkey along some unique bip32 path). > > - For each input ownership proof, the HW wallet validates each signature > > over the hashed message, then attempts to "decode" the hash by applying > > its own 'x'. If the hash doesn't match, it cannot be its own input. > > - Sign for every input that is yours > > Interesting, basically a proof of non-ownership :), a proof that the > hardware wallet doesn't own the address. > > But shouldn't x be public, so that the device can verify the signature? > Can you expand on this, what exactly is signed with which key and how is > it checked? > > One also has to make sure that it's not possible to reuse signatures as > ownership proof that were made for a different purpose. > > Jochen > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik at q32.com Tue Aug 22 20:06:01 2017 From: erik at q32.com (Erik Aronesty) Date: Tue, 22 Aug 2017 16:06:01 -0400 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: References: <4c39bee6-f419-2e36-62a8-d38171b15558@aei.ca> Message-ID: > The initial message I replied to stated: Yes, 3 years is silly.
But coin expiration and quantum resistance are something I've been thinking about for a while, so I tried to steer the conversation away from stealing old money for no reason ;). Plus I like the idea of making Bitcoin "2000 year proof". - I cannot imagine either SHA256 or any of our existing wallet formats surviving 200 years, if we expect both Moore's law and quantum computing to be a thing. I would expect the PoW to be rendered obsolete before the Bitcoin addresses. - A PoW change using Keccak and a flexible number of bits can be designed as a "future hard fork". That is: the existing PoW can be automatically rendered obsolete... but only in the event that difficulty rises to the level of obsolescence. Then the code for a new algorithm with a flexible number of bits and a difficulty that can scale for thousands of years can automatically kick in. - A new address format and signing protocols that use a flexible number of bits can be introduced. The maximum number of supported bits can be configurable, and trivially changed. These can be made immediately available but completely optional. - The PoW difficulty can be used to inform the expiration of any addresses that could be compromised within 5 years, assuming this power was somehow used to compromise them. Some mechanism for translating global hashpower to brute force attack power can be researched, and conservative estimates made. Right now, it's like a "heat death of the universe" amount of time to crack with every machine on the planet. But hey... things change and 2000 years is a long time. This information can be used to inform the expiration and reclamation of old, compromised public addresses. - Planning a hard fork 100 to 1000 years out is a fun exercise On Tue, Aug 22, 2017 at 2:55 PM, Chris Riley wrote: > The initial message I replied to stated in part, "Okay so I quite like > this idea.
If we start removing at height 630000 or 840000 (gives us 4-8 > years to develop this solution), it stays nice and neat with the halving > interval...." > > That is less than 3 years or less than 7 years away. Much sooner than it > is believed QC or Moore's law could impact bitcoin. Changing bitcoin so as > to require that early coins start getting "scavenged" at that date seems > unneeded and irresponsible. Besides, your ECDSA is only revealed when you > spend the coins which does provide some quantum resistance. Hal was just > an example of people putting their coins away expecting them to be there at > X years in the future, whether it is for himself or for his kids and wife. > > :-) > > [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at friedenbach.org Tue Aug 22 20:20:41 2017 From: mark at friedenbach.org (Mark Friedenbach) Date: Tue, 22 Aug 2017 13:20:41 -0700 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: References: <4c39bee6-f419-2e36-62a8-d38171b15558@aei.ca> Message-ID: <3E90F36F-A583-4B46-A6AF-2C78FE3F48B2@friedenbach.org> A fun exercise to be sure, but perhaps off topic for this list? > On Aug 22, 2017, at 1:06 PM, Erik Aronesty via bitcoin-dev wrote: > > > The initial message I replied to stated: > > Yes, 3 years is silly.
But coin expiration and quantum resistance is something I've been thinking about for a while, so I tried to steer the conversation away from stealing old money for no reason ;). Plus I like the idea of making Bitcoin "2000 year proof". > > [...] > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniele.pinna at gmail.com Tue Aug 22 22:17:19 2017 From: daniele.pinna at gmail.com (Daniele Pinna) Date: Wed, 23 Aug 2017 00:17:19 +0200 Subject: [bitcoin-dev] UTXO growth scaling solution proposal Message-ID: Also.... how is this not a tax on coin holders? By forcing people to move coins around you would be chipping away at their wealth in the form of extorted TX fees. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rodney.morris at gmail.com Tue Aug 22 22:58:54 2017 From: rodney.morris at gmail.com (Rodney Morris) Date: Wed, 23 Aug 2017 08:58:54 +1000 Subject: [bitcoin-dev] UTXO growth scaling solution proposal Message-ID: Thomas et al. So, in your minds, anyone who locked up coins using CLTV for their child to receive on their 21st birthday, for the sake of argument, has effectively forfeited those coins after the fact? You are going to force anyone who took coins offline (cryptosteel, paper, doesn't matter) to bring those coins back online, with the inherent security risks? In my mind, the only sane way to even begin discussing an approach to implementing such a thing - where coins "expire" after X years - would be to give the entire ecosystem X*N years warning, where N > 1.5. I'd also suggest X would need to be closer to the life span of a human than zero. Mind you, I'd suggest this "feature" would need to be coded and deployed as a future-hard-fork X*N years ahead of time. A la Satoshi's blog post regarding increasing the block size limit, a good enough approximation would be to add a block height check to the code that approximates X*N years, based on 10 minute blocks. The transparency around such a change would need to be radical and absolute. I'd also suggest that, similar to CLTV, it only makes sense to discuss creating a "never expire" transaction output, if such a feature were being seriously considered. If you think discussions around a block size increase were difficult, then we'll need a new word to describe the challenges and vitriol that would arise in the arguments that will follow this discussion should it be seriously proposed, IMHO. I also don't think it's reasonable to conflate the discussion herein with discussion about what to do when ECC or SHA256 is broken.
The weakening/breaking of ECC poses a real risk to the stability of Bitcoin - the possible release of Satoshi's stash being the most obvious example - and what to do about that will require serious consideration when the time comes. Even if the end result is the same - that coins older than "X" will be invalidated - everything else important about the scenarios is different as far as I can see. Rodney > > > Date: Tue, 22 Aug 2017 13:24:05 -0400 > From: Thomas Guyot-Sionnest > To: Erik Aronesty , Bitcoin Protocol Discussion > , Chris Riley > > Cc: Matthew Beton > Subject: Re: [bitcoin-dev] UTXO growth scaling solution proposal > Message-ID: <4c39bee6-f419-2e36-62a8-d38171b15558 at aei.ca> > Content-Type: text/plain; charset="windows-1252" > > [...] > > -- > Thomas > > > -------------- next part -------------- An HTML attachment was scrubbed...
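Rodney's suggestion above of deploying the fork as a block-height check approximating X*N years at 10-minute blocks is simple arithmetic; a toy sketch (all the concrete numbers below are hypothetical, chosen only for illustration):

```python
# One block every 10 minutes on average: 6 per hour.
BLOCKS_PER_YEAR = 365.25 * 24 * 6  # ~52,596 blocks

def activation_height(current_height: int, x_years: float, n: float) -> int:
    # Block height approximating X*N years after the current tip,
    # as a compiled-in future-hard-fork activation point.
    return current_height + round(x_years * n * BLOCKS_PER_YEAR)

# e.g. a 50-year expiry with N = 2, announced at height 480000,
# would activate a little over five million blocks later.
h = activation_height(480000, 50, 2)
```

The real activation date would drift with actual hashrate, since block intervals only average 10 minutes, which is why Satoshi's height-check approach is described as a "good enough approximation".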
URL: From dermoth at aei.ca Tue Aug 22 23:27:30 2017 From: dermoth at aei.ca (Thomas Guyot-Sionnest) Date: Tue, 22 Aug 2017 19:27:30 -0400 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: References: Message-ID: On 22/08/17 06:17 PM, Daniele Pinna via bitcoin-dev wrote: > Also.... how is this not a tax on coin holders? By forcing people to > move coins around you would be chipping away at their wealth in the > form of extorted TX fees. > As if the fee for one tx per decade (or more if we'd like) matters, plus it could be very low priority. In fact we could re-allow free transactions based on old priority rules (oldest outputs get higher priority... I would suggest considering a reduction in UTXO size as well but that's another topic). Actually, to ensure miners allow these transactions, one rule could be that the block must contain free transactions on old UTXOs ("old" TBD) to reclaim from the scavenged pool... One side effect is that mining empty blocks before the previous block's TX can be validated would reduce the reward. I'd love to find a clever approach where we could somehow make a verifiable block check that old-tx refreshes are included... I haven't put much thought into it yet, but if there was a way to make a two-step transaction where 1. a fee is paid to register a UTXO refresh (miners would be encouraged to accept it and increase their immediate revenue), and 2. the fee must be returned from the pool on a later block. The idea is to allow free scavenging of one's own addresses while discouraging miners from refusing free transactions so they could eventually reclaim the coins. I can't think of a way that limits the burden on consensus rules... -- Thomas From dermoth at aei.ca Tue Aug 22 23:29:41 2017 From: dermoth at aei.ca (Thomas Guyot-Sionnest) Date: Tue, 22 Aug 2017 19:29:41 -0400 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: References: Message-ID: I'm just getting the proposal out...
if we decide to go forward (a pretty huge "if" right now), when it kicks in, whether after 15, 50 or 100 years, should be decided as early as possible. Are CheckLockTimeVerify transactions accepted yet? I thought most special transactions were only accepted on Testnet... In any case we should be able to scan the blockchain and look for any such transaction. And I hate to make this more complex, but maybe re-issuing the tx from coinbase could be an option? -- Thomas On 22/08/17 06:58 PM, Rodney Morris via bitcoin-dev wrote: > Thomas et al. > > [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at friedenbach.org Wed Aug 23 03:26:19 2017 From: mark at friedenbach.org (Mark Friedenbach) Date: Tue, 22 Aug 2017 20:26:19 -0700 Subject: [bitcoin-dev] UTXO growth scaling solution proposal In-Reply-To: References: Message-ID: <02ECA1E2-B113-4668-984A-70445052C8B9@friedenbach.org> Lock time transactions have been valid for over a year now, I believe. In any case we can't scan the block chain for usage patterns in UTXOs because P2SH puts the script in the signature on spend. > On Aug 22, 2017, at 4:29 PM, Thomas Guyot-Sionnest via bitcoin-dev wrote: > > I'm just getting the proposal out... if we decide to go forward (pretty huge "if" right now) whenever it kicks in after 15, 50 or 100 years should be decided as early as possible. > > Are CheckLockTimeVerify transactions accepted yet? I thought most special transactions were only accepted on Testnet... In any case we should be able to scan the blockchain and look for any such transaction. And I hate to make this more complex, but maybe re-issuing the tx from coinbase could be an option?
> > -- > Thomas > >> On 22/08/17 06:58 PM, Rodney Morris via bitcoin-dev wrote: >> Thomas et.al. >> >> [...] >>
Even if the end result is the same - that coins older than "X" will be invalidated - everything else important about the scenarios are different as far as I can see. >> >> Rodney >> > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From tshachaf at gmail.com Sat Aug 26 19:21:16 2017 From: tshachaf at gmail.com (Adam Tamir Shem-Tov) Date: Sat, 26 Aug 2017 22:21:16 +0300 Subject: [bitcoin-dev] Solving the Scalability Problem on Bitcoin Message-ID: Solving the Scalability issue for bitcoin
I have this idea to solve the scalability problem I wish to make public. If I am wrong I hope to be corrected, and if I am right we will all gain by it.
Currently each block is being hashed, and in its contents are the hash of the block preceding it, this goes back to the genesis block.
What if we decided, for example, to combine and prune the blockchain in its entirety every 999 blocks into one block (Genesis block not included in the count)?
How would this work? Once block 1000 has been created, the network would wait for a special "pruned block", and until this block was created and verified, block 1001 would not be accepted by any node. This pruned block would prune everything from block 2 to block 1000, leaving only the genesis block. Blocks 2 through 1000 would be summed up to create a single set of transactions representing all transactions which occurred in those 999 blocks.
And its hash pointer would be the Genesis block. This block would now be verified by the full nodes, which if accepted would then be willing to accept a new block (block 1001, not including the pruned block in the count).
The new block 1001, would use as its hash pointer the pruned block as its reference. And the count would begin again to the next 1000. The next pruned block would be created, its hash pointer will be referenced to the Genesis Block. And so on..
In this way the ledger will always be a maximum of 1000 blocks.
A bit more detail:
All the outputs needed to verify early transactions will be in the pruning block. The only information you lose is that of the intermediate transactions, not the final ones the community has already accepted. For example:
A = 2.3 BTC, B=0, C=1.4. (Block 1)
If A sends 2.3 BTC to B. (Block 2)
And then B sends 1.5 to C. (Block 3)
The pruning block will report:
B = 0.8 and C=2.9.
The rest of the information you lose is irrelevant. No one needs to know that A even existed since it is now empty, nor do they need to know how much B and C had previously, only what they have now.
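The summary in the example above can be sketched as a toy account model. This is hypothetical: real Bitcoin tracks unspent outputs rather than per-key balances, and amounts here are integer tenths of a BTC so the arithmetic stays exact.

```python
# Toy model of the balance summary above (hypothetical: real Bitcoin
# tracks unspent outputs, not per-key account balances).
# Amounts are in tenths of a BTC to keep the arithmetic exact.
def prune_balances(initial, transfers):
    balances = dict(initial)
    for sender, receiver, amount in transfers:
        balances[sender] = balances.get(sender, 0) - amount
        balances[receiver] = balances.get(receiver, 0) + amount
    # Emptied entries disappear, as the post suggests (A is dropped).
    return {k: v for k, v in balances.items() if v > 0}

start = {"A": 23, "B": 0, "C": 14}      # A=2.3, B=0, C=1.4 BTC
txs = [("A", "B", 23), ("B", "C", 15)]  # A->B 2.3, then B->C 1.5
print(prune_balances(start, txs))       # {'B': 8, 'C': 29}
```

The pruning block then only needs to record B = 0.8 and C = 2.9.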
Note: the transaction chain would also need to be rewritten to delete all intermediate transactions; it would show transactions as occurring from the genesis block directly to the pruned block, as though nothing ever existed in between.

You can keep the old blocks on your drive for 10 more blocks or so, just in case a longer block chain is found, but other than that the information they hold is useless, since it has all been agreed upon. And the pruning block holds all up-to-date account balances, so cheating is impossible.
Granted, this pruning block can get extremely large in the future; it will not be the regular size of the other blocks. For example, if every account held only 1 satoshi, the minimum, then the number of accounts would be at its maximum. Assuming a transaction is about 256 bytes, the pruning block would be approximately 500PB, which is 500,000 terabytes. That is a theoretical scenario, which is not likely to occur. (256 bytes * 21M BTC * 100M satoshis in 1 BTC)
That scenario could be addressed by creating a minimum transaction fee of 100 satoshis, which would ensure that even in the most unlikely case, the pruning block would be at worst about 5PB in size.
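A rough sanity check of these figures, assuming the 21M BTC supply cap and roughly 256 bytes per recorded entry:

```python
# Back-of-envelope check of the worst-case pruning-block sizes above,
# assuming ~256 bytes per entry and the 21M BTC supply cap.
TX_BYTES = 256
SUPPLY_SAT = 21_000_000 * 100_000_000  # total supply in satoshis

def pruning_block_pb(min_output_sat):
    outputs = SUPPLY_SAT // min_output_sat  # maximum number of entries
    return outputs * TX_BYTES / 1e15        # size in petabytes

print(pruning_block_pb(1))    # 537.6 -> ~500 PB with 1-satoshi outputs
print(pruning_block_pb(100))  # 5.376 -> ~5 PB with a 100-satoshi floor
```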
Also, this pruning block does not even need to be downloaded; it could be created from already existing information by each full node itself, by:
1) combining and pruning all previous blocks
2) using the genesis block as its hash pointer
3) using a predefined nonce, for example "2", shared by all. The random nonce normally added to a block to meet the proof-of-work difficulty target is not needed in this case, since all information can be verified by each node itself through pruning.
4) Any other information needed for the SHA256 hash, for example a timestamp, could be copied from the last block in the block chain.
These steps will ensure that each full node gets exactly the same hash for this pruning block as every other node.
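Steps 1-4 can be sketched as follows. The serialization format here is invented for illustration; any canonical encoding shared by all nodes would do.

```python
import hashlib
import json

# Sketch of steps 1-4 above. The field layout is invented for
# illustration; any canonical encoding shared by all nodes would do.
def exodus_block_hash(balances, genesis_hash, timestamp, nonce=2):
    payload = json.dumps({
        "prev": genesis_hash,                  # step 2: genesis block pointer
        "balances": sorted(balances.items()),  # step 1: pruned ledger, canonical order
        "nonce": nonce,                        # step 3: predefined shared value
        "time": timestamp,                     # step 4: copied from block 1000
    }, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Two nodes deriving the same balances independently get the same hash.
h1 = exodus_block_hash({"B": 8, "C": 29}, "00" * 32, 1503792000)
h2 = exodus_block_hash({"C": 29, "B": 8}, "00" * 32, 1503792000)
print(h1 == h2)  # True
```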
And as I previously stated the next block will use this hash code as its hash reference.
In a system like this, the pruning block does not have to be created at the last minute; it can be built gradually over time, every time a new block comes in, and only when the last block arrives (block 1000) will it be finalized and hashed.
And since this block will always be second, it should go by the name "Exodus Block".
Adam Shem-Tov -------------- next part -------------- An HTML attachment was scrubbed... URL: From tshachaf at gmail.com Sat Aug 26 21:01:56 2017 From: tshachaf at gmail.com (Adam Tamir Shem-Tov) Date: Sun, 27 Aug 2017 00:01:56 +0300 Subject: [bitcoin-dev] Solving the Scalability Problem Part II - Adam Shem-Tov Message-ID: Solving the Scalability Problem Part II --------------------------------------------------------------------
In the previous post I showed a way to minimize the blocks on the block chain, to lower the amount of space it takes on the hard drive, without losing any relevant information. I added a note, saying that the transaction chain needs to be rewritten, but I did not give much detail to it.
Here is how that would work:
The Genesis Account: -----------------------------------------
The problem with changing the transaction and block chain is that it cannot be done without knowing the private key of the sender of the funds for each account. There is, however, a way to circumvent that problem: create a special account called the "Genesis Account", whose private key and public key will be available to everyone.
But this account will not be able to send or receive any funds in a normal block, it will be blocked--blacklisted. So no one can intentionally use it. The only time this account will be used is in the pruning block, a.k.a Exodus Block.
When creating the new pruned block chain and transaction chain, all the funds that are now in accounts must be legitimate, and it would be difficult to legitimize them unless they were sent from a legitimate account with a public key and a private key which can be verified. That is where the Genesis Account comes in. All funds in the Exodus Block will show as though they originated from and were sent by the Genesis Account, using its private key to generate each transaction.
The funds which are sent, must match exactly the funds existing in the most updated ledger in block 1000 (the last block as stated in my previous post).
In this way the Exodus Block can be verified, and the Genesis Account cannot give free money to anyone, because if someone tried to, it would fail verification.

Now the next problem is that the number of Bitcoins keeps expanding, so the funds in the Genesis Account need to expand as well. That can be done by making this account appear to be the one mining the coins: it will be the only account in the Exodus Block which "mines" coins and receives the mining reward. In the Exodus Block, all coins mined by the real miners will show as though they were mined by Genesis and sent to the miners through a regular transaction.
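A toy sketch of this Genesis Account scheme follows. The signature here is a stand-in hash, not real ECDSA, and all key names and formats are illustrative assumptions, not part of the proposal.

```python
import hashlib

# Toy sketch of the Exodus-block idea: every surviving balance is re-issued
# as a transaction "signed" by the publicly known Genesis key. The signature
# is a stand-in hash, not real ECDSA; all names here are illustrative.
GENESIS_PRIVKEY = b"publicly-known-genesis-key"

def genesis_sign(message: bytes) -> str:
    return hashlib.sha256(GENESIS_PRIVKEY + message).hexdigest()

def build_exodus_txs(final_balances):
    return [
        {"to": owner, "amount": amount,
         "sig": genesis_sign(f"GENESIS->{owner}:{amount}".encode())}
        for owner, amount in sorted(final_balances.items())
    ]

def verify_exodus(txs, expected_ledger):
    # Re-issued funds must match the block-1000 ledger exactly; otherwise
    # verification fails, so Genesis cannot hand out free money.
    sigs_ok = all(
        tx["sig"] == genesis_sign(f"GENESIS->{tx['to']}:{tx['amount']}".encode())
        for tx in txs
    )
    rebuilt = {tx["to"]: tx["amount"] for tx in txs}
    return sigs_ok and rebuilt == expected_ledger

ledger = {"B": 8, "C": 29}                    # final balances at block 1000
txs = build_exodus_txs(ledger)
print(verify_exodus(txs, ledger))             # True
print(verify_exodus(txs, {"B": 8, "C": 30}))  # False: ledger mismatch
```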
Adam Shem-Tov -------------- next part -------------- An HTML attachment was scrubbed... URL: From dermoth at aei.ca Sat Aug 26 21:31:11 2017 From: dermoth at aei.ca (Thomas Guyot-Sionnest) Date: Sat, 26 Aug 2017 17:31:11 -0400 Subject: [bitcoin-dev] Solving the Scalability Problem on Bitcoin In-Reply-To: References: Message-ID: Pruning is already implemented in the nodes... Once enabled only unspent inputs and most recent blocks are kept. IIRC there was also a proposal to include UTXO in some blocks for SPV clients to use, but that would be additional to the blockchain data. Implementing your solution is impossible because there is no way to determine authenticity of the blockchain mid way. The proof that a block hash leads to the genesis block is also a proof of all the work that's been spent on it (the years of hashing). At the very least we'd have to keep all blocks until a hard-coded checkpoint in the code, which also means that as nodes upgrades and prune more blocks older nodes will have difficulty syncing the blockchain. Finally it's not just the addresses and balance you need to save, but also each unspent output block number, tx position and script that are required for validation on input. That's a lot of data that you're suggesting to save every 1000 blocks (and why 1000?), and as said earlier it doesn't even guarantee you can drop older blocks. I'm not even going into the details of making it work (hard fork, large block sync/verification issues, possible attack vectors opened by this...) What is wrong with the current implementation of node pruning that you are trying to solve? -- Thomas On 26/08/17 03:21 PM, Adam Tamir Shem-Tov via bitcoin-dev wrote: > > Solving the Scalability issue for bitcoin
> > I have this idea to solve the scalability problem I wish to make public. > > If I am wrong I hope to be corrected, and if I am right we will all > gain by it.
> > Currently each block is being hashed, and in its contents are the hash > of the block preceding it, this goes back to the genesis block. > >
> > What if we decide, for example, we decide to combine and prune the > blockchain in its entirety every 999 blocks to one block (Genesis > block not included in count). > >
> > How would this work?: Once block 1000 has been created, the network > would be waiting for a special "pruned block", and until this block > was created and verified, block 1001 would not be accepted by any nodes. > > This pruned block would prune everything from block 2 to block 1000, > leaving only the genesis block. Blocks 2 through 1000, would be > calculated, to create a summed up transaction of all transactions > which occurred in these 999 blocks. > >
> > And its hash pointer would be the Genesis block. > > This block would now be verified by the full nodes, which if accepted > would then be willing to accept a new block (block 1001, not including > the pruned block in the count). > >
> > The new block 1001, would use as its hash pointer the pruned block as > its reference. And the count would begin again to the next 1000. The > next pruned block would be created, its hash pointer will be > referenced to the Genesis Block. And so on.. > >
> > In this way the ledger will always be a maximum of 1000 blocks. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dermoth at aei.ca Sat Aug 26 21:41:34 2017 From: dermoth at aei.ca (Thomas Guyot-Sionnest) Date: Sat, 26 Aug 2017 17:41:34 -0400 Subject: [bitcoin-dev] Solving the Scalability Problem Part II - Adam Shem-Tov In-Reply-To: References: Message-ID: <57de4421-0162-67c5-8905-10f6b477644c@aei.ca> I don't think you fully understand the way bitcoin works. There are no "accounts" and no need to know the private key to change transactions in the chain. What you need is to keep track of all unspent outputs (block number, index, value and script/witness) so that they can be verified once a transaction refers to it. Everything you suggest about moving those funds to a "genesis account" is nonsense and cannot work. -- Thomas On 26/08/17 05:01 PM, Adam Tamir Shem-Tov via bitcoin-dev wrote: > > Solving the Scalability Problem Part II > -------------------------------------------------------------------- >
> In the previous post I showed a way to minimize the blocks on the > block chain, to lower the amount of space it takes on the hard drive, > without losing any relevant information. > I added a note, saying that the transaction chain needs to be > rewritten, but I did not give much detail to it.
> Here is how that would work:
> The Genesis Account: > -----------------------------------------
> The problem with changing the transaction and block chain, is that it > cannot be done without knowing the private key of the sender of the of > the funds for each account. There is however a way to circumvent that > problem. That is to create a special account called the "Genesis > Account", this account's Private Key and Public Key will be available > to everyone.
> But this account will not be able to send or receive any funds in a > normal block, it will be blocked--blacklisted. So no one can > intentionally use it. The only time this account will be used is in > the pruning block, a.k.a Exodus Block.
> When creating the new pruned block chain and transaction chain, all > the funds that are now in accounts must be legitimate, and it would be > difficult to legitimize them unless they were sent from a legitimate > account, with a public key, and a private key which can be verified. > That is where the Genesis account comes in. All funds in the Exodus > Block will show as though they originated and were sent from the > Genesis Account using its privatekey to generate each transaction.
> The funds which are sent, must match exactly the funds existing in the > most updated ledger in block 1000 (the last block as stated in my > previous post).
> In this way the Exodus Block can be verified, and the Genesis Account > cannot give free money to anyway, because if someone tried to, it > would fail verification.
> >
> Now the next problem is that the number of Bitcoins keeps expanding > and so the funds in the Genesis Account need to expand as well. That > can be done by showing as though this account is the account which is > mining the coins, and it will be the only account in the Exodus Block > which "mines" the coins, and receives the mining bonus. In the Exodus > Block all coins mined by the real miners will show as though they were > mined by Genesis and sent to the miners through a regular transaction. > >
> > Adam Shem-Tov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From criley at gmail.com Sat Aug 26 21:42:16 2017 From: criley at gmail.com (Christian Riley) Date: Sat, 26 Aug 2017 17:42:16 -0400 Subject: [bitcoin-dev] Solving the Scalability Problem Part II - Adam Shem-Tov In-Reply-To: References: Message-ID: There have been a number of similar (identical?) proposals over the years, some were discussed in these threads: https://bitcointalk.org/index.php?topic=56226.0 https://bitcointalk.org/index.php?topic=505.0 https://bitcointalk.org/index.php?topic=473.0 https://bitcointalk.org/index.php?topic=52859.0 https://bitcointalk.org/index.php?topic=12376.0 https://bitcointalk.org/index.php?topic=74559.15 > On Aug 26, 2017, at 5:01 PM, Adam Tamir Shem-Tov via bitcoin-dev wrote: > > Solving the Scalability Problem Part II > -------------------------------------------------------------------- >
> In the previous post I showed a way to minimize the blocks on the block chain, to lower the amount of space it takes on the hard drive, without losing any relevant information. > I added a note, saying that the transaction chain needs to be rewritten, but I did not give much detail to it.
> Here is how that would work:
> The Genesis Account: > -----------------------------------------
> The problem with changing the transaction and block chain, is that it cannot be done without knowing the private key of the sender of the of the funds for each account. There is however a way to circumvent that problem. That is to create a special account called the "Genesis Account", this account's Private Key and Public Key will be available to everyone.
> But this account will not be able to send or receive any funds in a normal block, it will be blocked--blacklisted. So no one can intentionally use it. The only time this account will be used is in the pruning block, a.k.a Exodus Block.
> When creating the new pruned block chain and transaction chain, all the funds that are now in accounts must be legitimate, and it would be difficult to legitimize them unless they were sent from a legitimate account, with a public key, and a private key which can be verified. That is where the Genesis account comes in. All funds in the Exodus Block will show as though they originated and were sent from the Genesis Account using its privatekey to generate each transaction.
> The funds which are sent, must match exactly the funds existing in the most updated ledger in block 1000 (the last block as stated in my previous post).
> In this way the Exodus Block can be verified, and the Genesis Account cannot give free money to anyway, because if someone tried to, it would fail verification.
>
> Now the next problem is that the number of Bitcoins keeps expanding and so the funds in the Genesis Account need to expand as well. That can be done by showing as though this account is the account which is mining the coins, and it will be the only account in the Exodus Block which "mines" the coins, and receives the mining bonus. In the Exodus Block all coins mined by the real miners will show as though they were mined by Genesis and sent to the miners through a regular transaction. >
> Adam Shem-Tov > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From tshachaf at gmail.com Sat Aug 26 22:26:15 2017 From: tshachaf at gmail.com (Adam Tamir Shem-Tov) Date: Sun, 27 Aug 2017 01:26:15 +0300 Subject: [bitcoin-dev] Solving the Scalability Problem Part II - Adam Shem-Tov In-Reply-To: References: Message-ID: Thank You Christian for your response. https://bitcointalk.org/index.php?topic=473.0 : I don't see the relevance. https://bitcointalk.org/index.php?topic=52859.0 : This idea does not seem to be talking about trimming the full node. Trimming the full node is the key; the full node is what keeps us secure from hackers. If it can be trimmed without losing security, that would be good, and that is what I am proposing. https://bitcointalk.org/index.php?topic=12376.0 : Same answer as 505.0. https://bitcointalk.org/index.php?topic=74559.15 : I think his proposal is similar to mine; unfortunately for us, his predictions were way off. He was trying to fix this problem while believing that in the year 2020 the blockchain would be 4GB!!! It is not his fault; his prediction was made in 2011. But you can see that his prediction, which was rational at the time, was way off. And it stresses my point: we need to fix this now. Too bad no one took him seriously back then, when the block chain was 1GB. https://bitcointalk.org/index.php?topic=56226.0 : Another guy with a valid point, who was first acknowledged and then apparently ignored. To summarize, this problem was brought up about 6 years ago, when the blockchain was 1GB in size; now it is about 140GB in size.
I think it is about time we stop ignoring this problem, and realize something needs to change, or else the only full-nodes you will have will be with private multi-million dollar companies, because no private citizen will have the storage space to keep it. That would make bitcoin the worst decentralized or uncentralized system in history. On 27 August 2017 at 00:42, Christian Riley wrote: > There have been a number of similar (identical?) proposals over the years, > some were discussed in these threads: > https://bitcointalk.org/index.php?topic=56226.0 > https://bitcointalk.org/index.php?topic=505.0 > https://bitcointalk.org/index.php?topic=473.0 > https://bitcointalk.org/index.php?topic=52859.0 > https://bitcointalk.org/index.php?topic=12376.0 > https://bitcointalk.org/index.php?topic=74559.15 > > > On Aug 26, 2017, at 5:01 PM, Adam Tamir Shem-Tov via bitcoin-dev < > bitcoin-dev at lists.linuxfoundation.org> wrote: > > Solving the Scalability Problem Part II > -------------------------------------------------------------------- >
> In the previous post I showed a way to minimize the blocks on the block > chain, to lower the amount of space it takes on the hard drive, without > losing any relevant information. > I added a note, saying that the transaction chain needs to be rewritten, > but I did not give much detail to it.
> Here is how that would work:
> The Genesis Account: > -----------------------------------------
> The problem with changing the transaction and block chain, is that it > cannot be done without knowing the private key of the sender of the of the > funds for each account. There is however a way to circumvent that problem. > That is to create a special account called the "Genesis Account", this > account's Private Key and Public Key will be available to everyone.
> But this account will not be able to send or receive any funds in a normal > block, it will be blocked--blacklisted. So no one can intentionally use it. > The only time this account will be used is in the pruning block, a.k.a > Exodus Block.
> When creating the new pruned block chain and transaction chain, all the > funds that are now in accounts must be legitimate, and it would be > difficult to legitimize them unless they were sent from a legitimate > account, with a public key, and a private key which can be verified. That > is where the Genesis account comes in. All funds in the Exodus Block will > show as though they originated and were sent from the Genesis Account using > its privatekey to generate each transaction.
> The funds which are sent, must match exactly the funds existing in the > most updated ledger in block 1000 (the last block as stated in my previous > post).
> In this way the Exodus Block can be verified, and the Genesis Account > cannot give free money to anyway, because if someone tried to, it would > fail verification.
> >
> Now the next problem is that the number of Bitcoins keeps expanding and so > the funds in the Genesis Account need to expand as well. That can be done > by showing as though this account is the account which is mining the coins, > and it will be the only account in the Exodus Block which ?mines? the > coins, and receives the mining bonus. In the Exodus Block all coins mined > by the real miners will show as though they were mined by Genesis and sent > to the miners through a regular transaction. > >
> > Adam Shem-Tov > > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tshachaf at gmail.com Sat Aug 26 22:32:17 2017 From: tshachaf at gmail.com (Adam Tamir Shem-Tov) Date: Sun, 27 Aug 2017 01:32:17 +0300 Subject: [bitcoin-dev] Solving the Scalability Problem on Bitcoin In-Reply-To: References: Message-ID: Thank you Thomas for your response. 1) Implement solution is impossible... I have given a solution in part II. By adding a Genesis Account which will be the new sender. 2)Keeping older blocks: Yes as I said 10 older blocks should be kept, that should suffice. I am not locked on that number, if you think there is a reason to keep more than that, it is open to debate. 3) Why 1000? To be honest, that number came off the top of my head. These are minor details, the concept must first be accepted, then we can work on the minor details. 4)Finally it's not just the addresses and balance you need to save... I think the Idea of the Genesis Account, solves this issue. 5) The problem with node pruning is that it is not standardized, and for a new node to enter the network and to verify the data, it needs to download all data and prune it by itself. This will drastically lower the information needed by the full nodes by getting rid of the junk. Currently we are around 140GB, that number is getting bigger exponentially, by the number of users and transactions created. It could reach a Terrabyte sooner than expected, we need to act now. On your second email: When I say account: I mean private-public key. The way bitcoin works, as I understand it, is that the funds are verified by showing that they have an origin, this "origin" needs to provide a signature, otherwise the transaction won't be accepted. 
If I am proposing to remove all intermediate origins, then the funds become untraceable and hence unverifiable. To fix that, a new transaction needs to replace the old ones. A simplified version: if there was a transaction chain A->B->C->D, and I wish to show only A->D, a transaction like that never actually occurred, and it would be impossible to say that it did without having A's private key to sign it. In order to create this transaction, I need A's private key. And if I wish this to be publicly implemented, I need this key to be public, so that any node creating this Exodus Block can sign with it. Hence the Genesis Account. And yes, it is not really an account. On 27 August 2017 at 00:31, Thomas Guyot-Sionnest wrote: > Pruning is already implemented in the nodes... Once enabled only unspent > inputs and most recent blocks are kept. IIRC there was also a proposal to > include UTXO in some blocks for SPV clients to use, but that would be > additional to the blockchain data. > > Implementing your solution is impossible because there is no way to > determine authenticity of the blockchain mid way. The proof that a block > hash leads to the genesis block is also a proof of all the work that's been > spent on it (the years of hashing). At the very least we'd have to keep all > blocks until a hard-coded checkpoint in the code, which also means that as > nodes upgrades and prune more blocks older nodes will have difficulty > syncing the blockchain. > > Finally it's not just the addresses and balance you need to save, but also > each unspent output block number, tx position and script that are required > for validation on input. That's a lot of data that you're suggesting to > save every 1000 blocks (and why 1000?), and as said earlier it doesn't even > guarantee you can drop older blocks. I'm not even going into the details of > making it work (hard fork, large block sync/verification issues, possible > attack vectors opened by this...)
> > What is wrong with the current implementation of node pruning that you are > trying to solve? > > -- > Thomas > > On 26/08/17 03:21 PM, Adam Tamir Shem-Tov via bitcoin-dev wrote: > > Solving the Scalability issue for bitcoin
> > I have this idea to solve the scalability problem I wish to make public. > > If I am wrong I hope to be corrected, and if I am right we will all gain > by it.
> > Currently each block is being hashed, and in its contents are the hash of > the block preceding it, this goes back to the genesis block. > >
> > What if we decide, for example, we decide to combine and prune the > blockchain in its entirety every 999 blocks to one block (Genesis block not > included in count). > >
> > How would this work?: Once block 1000 has been created, the network would > be waiting for a special "pruned block", and until this block was created > and verified, block 1001 would not be accepted by any nodes. > > This pruned block would prune everything from block 2 to block 1000, > leaving only the genesis block. Blocks 2 through 1000, would be calculated, > to create a summed up transaction of all transactions which occurred in > these 999 blocks. > >
> > And its hash pointer would be the Genesis block. > > This block would now be verified by the full nodes, which if accepted > would then be willing to accept a new block (block 1001, not including the > pruned block in the count). > >
> > The new block 1001, would use as its hash pointer the pruned block as its > reference. And the count would begin again to the next 1000. The next > pruned block would be created, its hash pointer will be referenced to the > Genesis Block. And so on.. > >
> > In this way the ledger will always be a maximum of 1000 blocks. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From a at colourful.land Sun Aug 27 00:27:49 2017 From: a at colourful.land (Weiwu) Date: Sun, 27 Aug 2017 10:27:49 +1000 (AEST) Subject: [bitcoin-dev] Solving the Scalability Problem on Bitcoin In-Reply-To: References: Message-ID: On Sat, 26 Aug 2017, Adam Tamir Shem-Tov via bitcoin-dev wrote: > For example: > > A = 2.3 BTC, B=0, C=1.4. (Block 1) > > If A sends 2.3 BTC to B. (Block 2) > > And then B sends 1.5 to C. (Block 3) > > The pruning block will report: > > B = 0.8 and C=2.9. You effecitvely want these two transactions: A -(2.30)-> B; B -(1.5)-> C; To be shorten to one transaction: A -(0.8)-> B -(1.5)-> C; For that to work a lot of changes has to be done to Bitcoin. For simplicity of the discussion I'll assume all transactions are standard transactions. First, a block has to refer to the hash of the "balance sheet" (with nonce), not the hash of the previous block. This way, a previous block can be replaced with a smaller one without affecting the hash reference. To add problem to this significant change, Bitcoin uses UTXO table instead of "balance sheet". The difference is that UTXO is indexed by transaction ID while a balance sheet is indexed by owner's public keys. The shortening you suggested wouldn't affect the balance sheet but would totally replace UTXOs for B and C, and probably even A, if A has some changes left. Second, Alice has to place a new signature on the shortened transaction. The design challenge is how do we motivate A to do so, since A needs to do it after "B->C", at which time Alice's business is done and her wallet offline. Luckily, all bitcoins come from miners. Imagine A gets her money from A', and all the way back, the originating A" must be a miner. 
We just need to design a different reward mechanism, where miners are not only rewarded by finding blocks, but also by shortening transactions after his expenses. Whatever new reward mechanism it may be, it will interfer with block hash reference discussed in the previous paragraph. Third, hash references are stablized by work. This is necessary, otherwise a smaller block intended to replace a long one will not be forced to maintain the same balance sheet. However, because work is done on blocks, shortening can only happen within one block. Normally, Bob who receives a transaction in a block, will not spend it to Carol in the same block, because he wants 6 confirmations before being sure, therefore, there will be little opportunity of shortening in one block. You mentioned the idea of shortening between 1000 blocks - that surely give a lot of opportunities to shorten a large directed transaction graph, but you would abandon the proof of work in those 999 blocks in between. There are three major design issue that needs to be worked out, but almost all unique aspects of Bitcoin will be affected. Just to name a few: - wallets need to be aware that the UTXO in it may change to some other UTXO with the same sum value. - nLockTime transactions are affected. Such transactions timed for near future probably can stay by ruling that shortening can only happen after a year; however, those timed for years to come will find itself losing UTXO referenes (e.g. a will). - I assumed all transactions standard, but if they are not, those who can redeem them will lose the UTXO references to them after shortening. I am, like you, risking proposing what is already proposed or explaining what is already explained. The thinking around Bitcoin is a big tome! Regards Weiwu Z. 
From jtimon at jtimon.cc Sun Aug 27 11:33:04 2017 From: jtimon at jtimon.cc (Jorge Timón) Date: Sun, 27 Aug 2017 13:33:04 +0200 Subject: [bitcoin-dev] Solving the Scalability Problem Part II - Adam Shem-Tov In-Reply-To: References: Message-ID: Regarding storage space, have you heard about pruning? Probably you should. On 27 Aug 2017 12:27 am, "Adam Tamir Shem-Tov via bitcoin-dev" < bitcoin-dev at lists.linuxfoundation.org> wrote: > Thank You Christian for your response. > > https://bitcointalk.org/index.php?topic=473.0 : I don't see the relevance. > https://bitcointalk.org/index.php?topic=52859.0 : This idea does not seem > to be talking about trimming the full node. Trimming the full node is the key; > the full node is what keeps us secure from hackers. If it can be trimmed > without losing security, that would be good, that is what I am proposing. > https://bitcointalk.org/index.php?topic=12376.0 : Same answer as 505.0. > https://bitcointalk.org/index.php?topic=74559.15 : I think his proposal > is similar to mine, unfortunately for us his predictions were way off. He > was trying to fix this problem while believing that in the year 2020 the > blockchain would be 4GB!!! It is not his fault, his prediction was in 2011. > But you can see that his prediction, which was rational at the time, was way > off. And it stresses my point, we need to fix this now. Too bad no one > took him seriously back then, when the block chain was 1GB. > https://bitcointalk.org/index.php?topic=56226.0 : Another guy with a > valid point, who was first acknowledged and then apparently ignored. > To summarize, this problem was brought up about 6 years ago, when the > blockchain was 1GB in size. Now it is about 140GB in size.
I think it is > about time we stop ignoring this problem, and realize something needs to > change, or else the only full-nodes you will have will be with private > multi-million dollar companies, because no private citizen will have the > storage space to keep it. That would make bitcoin the worst decentralized > or uncentralized system in history. > > > On 27 August 2017 at 00:42, Christian Riley wrote: > >> There have been a number of similar (identical?) proposals over the >> years, some were discussed in these threads: >> https://bitcointalk.org/index.php?topic=56226.0 >> https://bitcointalk.org/index.php?topic=505.0 >> https://bitcointalk.org/index.php?topic=473.0 >> https://bitcointalk.org/index.php?topic=52859.0 >> https://bitcointalk.org/index.php?topic=12376.0 >> https://bitcointalk.org/index.php?topic=74559.15 >> >> >> On Aug 26, 2017, at 5:01 PM, Adam Tamir Shem-Tov via bitcoin-dev < >> bitcoin-dev at lists.linuxfoundation.org> wrote: >> >> Solving the Scalability Problem Part II >> -------------------------------------------------------------------- >>
>> In the previous post I showed a way to minimize the blocks on the block >> chain, to lower the amount of space it takes on the hard drive, without >> losing any relevant information. >> I added a note, saying that the transaction chain needs to be rewritten, >> but I did not give much detail to it.
>> Here is how that would work:
>> The Genesis Account: >> -----------------------------------------
>> The problem with changing the transaction and block chain is that it >> cannot be done without knowing the private key of the sender of the >> funds for each account. There is, however, a way to circumvent that problem. >> That is to create a special account called the "Genesis Account"; this >> account's Private Key and Public Key will be available to everyone.
>> But this account will not be able to send or receive any funds in a >> normal block, it will be blocked--blacklisted. So no one can intentionally >> use it. The only time this account will be used is in the pruning block, >> a.k.a Exodus Block.
>> When creating the new pruned block chain and transaction chain, all the >> funds that are now in accounts must be legitimate, and it would be >> difficult to legitimize them unless they were sent from a legitimate >> account, with a public key, and a private key which can be verified. That >> is where the Genesis account comes in. All funds in the Exodus Block will >> show as though they originated and were sent from the Genesis Account using >> its private key to generate each transaction.
>> The funds which are sent, must match exactly the funds existing in the >> most updated ledger in block 1000 (the last block as stated in my previous >> post).
>> In this way the Exodus Block can be verified, and the Genesis Account >> cannot give free money to anyone, because if someone tried to, it would >> fail verification.
>> >>
>> Now the next problem is that the number of Bitcoins keeps expanding and >> so the funds in the Genesis Account need to expand as well. That can be >> done by showing as though this account is the account which is mining the >> coins, and it will be the only account in the Exodus Block which "mines" >> the coins, and receives the mining bonus. In the Exodus Block all coins >> mined by the real miners will show as though they were mined by Genesis and >> sent to the miners through a regular transaction. >> >>
>> >> Adam Shem-Tov >> >> >> _______________________________________________ >> bitcoin-dev mailing list >> bitcoin-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev >> >> > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dermoth at aei.ca Sun Aug 27 05:18:32 2017 From: dermoth at aei.ca (Thomas Guyot-Sionnest) Date: Sun, 27 Aug 2017 01:18:32 -0400 Subject: [bitcoin-dev] Solving the Scalability Problem on Bitcoin In-Reply-To: References: Message-ID: <0dd361fb-8983-44c4-7f14-cd1e43049feb@aei.ca> How do you trust your <1000 block blockchain if you don't download/validate the whole thing? (I know it should be easy to spot that by looking at the blocks/tx or comparing to other nodes, but from a programmatic point of view this is much harder). You can of course include a checkpoint in the code to tell which recent block is valid (which is already done afaik), but you still need all blocks from that checkpoint to validate the chain (not 10!). If you rely on such checkpoint, why not just include the UTXO's as well so you can start mid-way based on code trust? Indeed pruning doesn't allow you to start mid-way yet but there are much easier solutions to that than what you propose. -- Thomas On 26/08/17 06:32 PM, Adam Tamir Shem-Tov wrote: > Thank you Thomas for your response. > > 1) Implement solution is impossible... I have given a solution in part > II. By adding a Genesis Account which will be the new sender. > > 2)Keeping older blocks: Yes as I said 10 older blocks should be kept, > that should suffice. I am not locked on that number, if you think > there is a reason to keep more than that, it is open to debate. > > 3) Why 1000? To be honest, that number came off the top of my head. 
> These are minor details, the concept must first be accepted, then we > can work on the minor details. > > 4)Finally it's not just the addresses and balance you need to save... > I think the Idea of the Genesis Account, solves this issue. > > 5) The problem with node pruning is that it is not standardized, and > for a new node to enter the network and to verify the data, it needs > to download all data and prune it by itself. This will drastically > lower the information needed by the full nodes by getting rid of the > junk. Currently we are around 140GB, that number is getting bigger > exponentially, by the number of users and transactions created. It > could reach a Terrabyte sooner than expected, we need to act now. > > On your second email: > When I say account: I mean private-public key. > The way bitcoin works, as I understand it, is that the funds are > verified by showing that they have an origin, this "origin" needs to > provide a signature, otherwise the transaction won't be accepted. > If I am proposing to remove all intermediate origins, then the funds > become untraceable and hence unverifiable. To fix that, a new > transaction needs to replace old ones. A simplified version: If there > was a transaction chain A->B->C->D, and I wish to show only A->D, only > a transaction like that never actually occurred, it would be > impossible to say that it did without having A's private key, in order > to sign this transaction. In order to create this transaction, I need > A's private key. And if I wish this to be publicly implemented I need > this key to be public, so that any node creating this Exodus Block can > sign with it. Hence the Genesis Account. And yes, it is not really an > account. 
From tshachaf at gmail.com Sun Aug 27 04:09:08 2017 From: tshachaf at gmail.com (Adam Tamir Shem-Tov) Date: Sun, 27 Aug 2017 07:09:08 +0300 Subject: [bitcoin-dev] Revised - Solving the Scalability Problem on Bitcoin Message-ID: This is a link to the most updated version of the problem and my proposed solution; granted it still needs work, but this problem needs to be resolved quickly. So I hope it will receive the attention it deserves, even if the solution comes from somebody else. https://bitcointalk.org/index.php?topic=2126152.new#new The latest version of the day: *Solving the Scalability issue for Bitcoin * *What am I trying to solve?* Currently Bitcoin's blockchain is around 140GB. In 2011 it took 1GB, and it was predicted back then that in 2020 that size would be 4GB. As you can see it is not yet 2020, and we are way over that predicted size. At present there are pruned nodes, which make the stored chain smaller, but they cannot be validated without the full node. And this full node is getting exponentially bigger; we need to stop that. Because if we don't, no private citizen will have the capability of storing the full node on his computer, and all full nodes will be at private multi-million dollar companies. That would literally be the end of decentralization (or non-centralization). What I am proposing also makes sure the blockchain has a maximum finite size, because today the blockchain can grow to any size without limit, approaching an infinite size! Today our blockchain is growing at a speed which is much faster than Moore's law! This proposal will help set storage growth at a reasonable number.
*A short list of what I am about to explain: Steps that need to be taken:* --------------------------------------------------------------------------------------------------------------------- (The details are not described in this order) 1) Create a pair of keys, called the Genesis Pair, or Genesis Account, a private and public key which will be publicly known to all and yet whose use will be restricted and monitored by all. The key will be the source of all funds (Point A). 2) Preserve the Genesis Block; its hash code is needed. And personally I think it's of historical value. 3) Combine all Blocks up to the most recent (not including the Genesis Block), and cut out all intermediary transactions, by removing all transactions and replacing them with new transactions sent from A to every public key which has funds in the most recent block, in the amount they have. And sign these transactions with A's private key. And create a new block with this information. 4) This Combined/Pruned Block should point to the Genesis Block hash, and the next block created should point to the Pruned Block's hash. The random number used for this pruned block will be predefined; this random number, normally used to meet the hash difficulty requirement, is not needed in this case, since no difficulty setting is necessary for this block, and by predefining it, this block can be easily identified. 5) Download the pruned block from another node or create it yourself; the hash code will be identical for everyone, since the block will be created exactly the same everywhere. 6) Preserve a certain amount of the most recent blocks, just in case a longer blockchain is discovered, in which case the Pruned Block should be recalculated. --------------------------------------------------------------------------------------------------------------------- *Now for a more detailed description: * I have this idea to solve the scalability problem I wish to make public.
If I am wrong I hope to be corrected, and if I am right we will all gain by it. Currently each block is being hashed, and in its contents is the hash of the block preceding it; this goes back to the genesis block. What if we decide, for example, to combine and prune the blockchain in its entirety every 999 blocks to one block (Genesis block not included in count). How would this work?: Once block 1000 has been created, the network would be waiting for a special "pruned block", and until this block was created and verified, block 1001 would not be accepted by any nodes. This pruned block would prune everything from block 2 to block 1000, leaving only the genesis block. Blocks 2 through 1000 would be calculated, to create a summed-up transaction of all transactions which occurred in these 999 blocks. And its hash pointer would be the Genesis block. This block would now be verified by the full nodes (or created by them), which if accepted would then be willing to accept a new block (block 1001, not including the pruned block in the count). The new block 1001 would use the pruned block as its hash pointer reference. And the count would begin again to the next 1000. The next pruned block would be created, its hash pointer referencing the Genesis Block. And so on. In this way the ledger will always be a maximum of 1000 blocks. A bit more detail: All the relevant outputs needed to verify early transactions will all be preserved in the pruning block. The only information you lose is that of the intermediate transactions, not the final ones the community has already accepted. Although the origin of the funds cannot be known, their destination is preserved, as well as a validation that the transactions are legitimate. For example: A = 2.3 BTC, B=0 BTC, C=1.4 BTC. (Block 1) If A sends 2.3 BTC to B. (Block 2) And then B sends 1.5 BTC to C. (Block 3) The pruning block will report: A->B = 0.8 BTC and A->C=2.9 BTC.
The rest of the information you lose is irrelevant. No one needs to know what exactly happened, who sent who what, or when. All that is needed is the funds currently owned by each key. Note: The Transaction Chain would also need to be rewritten, to delete all intermediate transactions; it will show as though transactions occurred from the Genesis block directly to the pruned block, as though nothing ever existed in between. This will be described below in more detail. You can keep the old blocks on your drive for 10 more blocks or so, just in case a longer block chain is found, but other than that the information they hold is useless, since it has all been agreed upon. And the pruning block holds all up-to-date account balances, so cheating is impossible. Granted, this pruning block can get extremely large in the future; it will not be the regular size of the other blocks. For example, if every account has only 1 satoshi in it, which is the minimum, then the amount of accounts will be at its maximum. Considering a transaction is about 256 bytes, that would mean the pruning block would be approximately 500PB, which is 500,000 terabytes. That is a theoretical scenario, which is not likely to occur. (256 bytes * 21M BTC * 100M (satoshis in 1 BTC)) A scenario which could be solved by creating a minimum transaction fee of, for example, 100 satoshis, which would ensure that even in the most unlikely scenario, at worst the pruning block would be 5PB in size. Which is still extremely large for today. But without implementing this idea the blockchain literally does not have a finite maximum size, and over time approaches infinity! *Also, this pruning block does not even need to be downloaded, it could be created from already existing information by each full node by itself, by: * 1) combining and pruning all previous blocks 2) using the genesis block as its hash pointer 3) using a predefined random number "2", which will be used by all.
A random number which is normally added to a block to meet the block's hash difficulty is not needed in this case, since all information can be verified by each node by itself through pruning. This number can also be used to identify this block as the Pruned/Combined Block since it is static. 4) Any other information which is needed for the SHA256 hash, for example a time-stamp, could be copied off the last block in the block chain. These steps will ensure each full node will get the exact same hash code as the others have gotten for this pruning block. And as I previously stated, the next block will use this hash code as its hash reference. By creating a system like this, the pruning block does not have to be created last minute, but gradually over time, every time a new block comes in, and only when the last block arrives (block 1000) will it be finalized, and hashed. And since this block will always be second, it should go by the name "Exodus Block". Above, I showed a way to minimize the blocks on the block chain, to lower the amount of space it takes on the hard drive, without losing any relevant information. I added a note saying that the transaction chain needs to be rewritten, but I did not give much detail to it. Here is how that would work: *The Genesis Account (Key Pair):* --------------------------------------------------- The problem with changing the transaction and block chain is that it cannot be done without knowing the private key of the sender of the funds for each account. To illustrate the problem: Suppose we have a series of blocks with a string of transactions that are A->B->C->D, and to simplify the problem, all money was sent during each transaction, so that no money is left in A or B or C. And I was to prune these transactions, by replacing them with A->D. But this transaction never occurred, nor can anyone create it without A's private key. There is, however, a way to circumvent that problem.
That is to create a special account called the "Genesis Account"; this account's Private Key and Public Key will be available to everyone. (Of course, accounts do not really exist in Bitcoin; when I say account what I really mean is a Private/Public Key pair.) This account will be the source of all funds. But this account will not be able to send or receive any funds in a normal block; it will be blocked--blacklisted. So no one can intentionally use it. The only time this account will be used is in the pruning block, a.k.a. Exodus Block. When creating the new pruned block chain and transaction chain, all the funds that are now in accounts must be legitimate, and it would be difficult to legitimize them unless they were sent from a legitimate private key, which can be verified. That is where the Genesis account comes in. All funds in the Exodus Block will show as though they originated and were sent with the Genesis private key used to generate each transaction. The funds which are sent must match exactly the funds existing in the most updated ledger in block 1000. In this way the Exodus Block can be verified, and the Genesis Account cannot give free money to anyone, because if someone tried to, it would fail verification. Now the next problem is that the number of Bitcoins keeps expanding and so the funds in the Genesis Account need to expand as well. That can be done by showing as though this account is the account which is mining the coins, and it will be the only account in the Exodus Block which "mines" the coins, and receives the mining bonus. In the Exodus Block all coins mined by the real miners will show as though they were mined by Genesis and sent to the miners through a regular transaction. I hope this proposal will be implemented as soon as possible so that we can avoid a problem which is growing by the minute.
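[The worst-case Exodus Block size quoted earlier in the proposal (~500PB, and ~5PB with a 100-satoshi minimum) can be checked with a few lines of arithmetic. This sketch just redoes the proposal's own multiplication, using its assumption of ~256 bytes per transaction.]

```python
TX_BYTES = 256                            # proposal's approximate size of one transaction
TOTAL_SATS = 21_000_000 * 100_000_000     # 21M BTC * 100M satoshis per BTC

# Worst case: every satoshi sits alone in its own 1-satoshi output,
# so the pruning block needs one transaction per satoshi.
worst_case_pb = TOTAL_SATS * TX_BYTES / 1e15
assert 500 < worst_case_pb < 550          # ~537 PB: the "approximately 500PB" figure

# With a 100-satoshi minimum, at most 1/100th as many outputs can exist.
capped_pb = (TOTAL_SATS // 100) * TX_BYTES / 1e15
assert 5 < capped_pb < 6                  # ~5.4 PB: the "5PB" figure
```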
It was brought up about 6 years ago when the blockchain was only 1GB in size; nobody imagined back then that it would grow so quickly, and the problem was ignored. Today all solutions have been implemented in software, and not on the blockchain itself; these solutions are not helpful in the long run. The full node needs to be publicly available to everyone, and at this rate nobody will have the hard-drive capacity to store it. This will make us more dependent on private corporations to store the blockchain, which will lead us quickly to a centralized currency platform. By then it will be too late, and the corporations will have complete control of what happens next. Please take this problem seriously and work with me to prevent it while we still have some time. The exact details can be worked out at a later time, but for now we need at least an acknowledgment that this problem is dire and needs to be solved in a year's time. I have presented a solution; if someone has a better one, then let him/her step forward, but in any case a solution needs to be implemented as soon as possible. *I have given a basic proposal; I am sure there are those among us with more technical understanding of the nuances of how this idea should be implemented. I am counting on their help to see this through.* Adam Shem-Tov -------------- next part -------------- An HTML attachment was scrubbed... URL: From btcideas at protonmail.com Sun Aug 27 03:52:57 2017 From: btcideas at protonmail.com (Btc Ideas) Date: Sat, 26 Aug 2017 23:52:57 -0400 Subject: [bitcoin-dev] Solving the Scalability Problem on Bitcoin In-Reply-To: References: Message-ID: <8Hljr4mi0oyyWjgWduIZ-4Q9rj8pCr-q3pEUJGoXcvp6h6YzsqBfFO30AKwhhoU0Sjm-lIKPI2h_0Vua52kROZztmnDCjaHm4egsZ8vZXD8=@protonmail.com> I also like only keeping the last "n" blocks. Every "n" blocks, all the previous balances are kept, but the transactions are deleted. There's good enough record keeping, and there's excessive. 
Part of scaling is being able to get the blockchain and sync quickly. Jason -------- Original Message -------- On Aug 27, 2017, 05:31, Thomas Guyot-Sionnest via bitcoin-dev wrote: > Pruning is already implemented in the nodes... Once enabled only unspent inputs and most recent blocks are kept. IIRC there was also a proposal to include UTXO in some blocks for SPV clients to use, but that would be additional to the blockchain data. > > Implementing your solution is impossible because there is no way to determine authenticity of the blockchain mid way. The proof that a block hash leads to the genesis block is also a proof of all the work that's been spent on it (the years of hashing). At the very least we'd have to keep all blocks until a hard-coded checkpoint in the code, which also means that as nodes upgrades and prune more blocks older nodes will have difficulty syncing the blockchain. > > Finally it's not just the addresses and balance you need to save, but also each unspent output block number, tx position and script that are required for validation on input. That's a lot of data that you're suggesting to save every 1000 blocks (and why 1000?), and as said earlier it doesn't even guarantee you can drop older blocks. I'm not even going into the details of making it work (hard fork, large block sync/verification issues, possible attack vectors opened by this...) > > What is wrong with the current implementation of node pruning that you are trying to solve? > > -- > Thomas > > On 26/08/17 03:21 PM, Adam Tamir Shem-Tov via bitcoin-dev wrote: > >> Solving the Scalability issue for bitcoin
>> >> I have this idea to solve the scalability problem I wish to make public. >> >> If I am wrong I hope to be corrected, and if I am right we will all gain by it.
>> >> Currently each block is being hashed, and in its contents are the hash of the block preceding it, this goes back to the genesis block. >> >>
>> >> What if we decide, for example, we decide to combine and prune the blockchain in its entirety every 999 blocks to one block (Genesis block not included in count). >> >>
>> >> How would this work?: Once block 1000 has been created, the network would be waiting for a special "pruned block", and until this block was created and verified, block 1001 would not be accepted by any nodes. >> >> This pruned block would prune everything from block 2 to block 1000, leaving only the genesis block. Blocks 2 through 1000, would be calculated, to create a summed up transaction of all transactions which occurred in these 999 blocks. >> >>
>> >> And its hash pointer would be the Genesis block. >> >> This block would now be verified by the full nodes, which if accepted would then be willing to accept a new block (block 1001, not including the pruned block in the count). >> >>
>> >> The new block 1001, would use as its hash pointer the pruned block as its reference. And the count would begin again to the next 1000. The next pruned block would be created, its hash pointer will be referenced to the Genesis Block. And so on.. >> >>
>> >> In this way the ledger will always be a maximum of 1000 blocks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lescoutinhovr at gmail.com Sun Aug 27 12:10:19 2017 From: lescoutinhovr at gmail.com (Leandro Coutinho) Date: Sun, 27 Aug 2017 09:10:19 -0300 Subject: [bitcoin-dev] Solving the Scalability Problem on Bitcoin In-Reply-To: References: Message-ID: >>> 5) The problem with node pruning is that it is not standardized, and for a new node to enter the network and to verify the data, it needs to download all data and prune it by itself. This will drastically lower the information needed by the full nodes by getting rid of the junk. Currently we are around 140GB, that number is getting bigger exponentially, by the number of users and transactions created. It could reach a terabyte sooner than expected, we need to act now. Having to download the whole blockchain and only then prune is a big drawback. So I thought about the concept of "trusted" nodes, where you could choose some nodes to connect to and from which block you want to download. Of course they would do this at their own risk, but there are ways to minimize the risk, like: - check whether the latest blocks (hashes) match what you find on some sites, like blockchain.info - download and compare the UTXO set from all (or some of) the nodes you are connected to Currently the UTXO size is around 2GB and we can't know how fast it will grow (?) On 26/08/2017 19:39, "Adam Tamir Shem-Tov via bitcoin-dev" < bitcoin-dev at lists.linuxfoundation.org> wrote: Thank you Thomas for your response. 1) Implementing your solution is impossible... I have given a solution in part II. By adding a Genesis Account which will be the new sender. 2) Keeping older blocks: Yes, as I said, 10 older blocks should be kept; that should suffice. I am not locked on that number; if you think there is a reason to keep more than that, it is open to debate. 3) Why 1000? To be honest, that number came off the top of my head. 
These are minor details, the concept must first be accepted, then we can work on the minor details. 4)Finally it's not just the addresses and balance you need to save... I think the Idea of the Genesis Account, solves this issue. 5) The problem with node pruning is that it is not standardized, and for a new node to enter the network and to verify the data, it needs to download all data and prune it by itself. This will drastically lower the information needed by the full nodes by getting rid of the junk. Currently we are around 140GB, that number is getting bigger exponentially, by the number of users and transactions created. It could reach a Terrabyte sooner than expected, we need to act now. On your second email: When I say account: I mean private-public key. The way bitcoin works, as I understand it, is that the funds are verified by showing that they have an origin, this "origin" needs to provide a signature, otherwise the transaction won't be accepted. If I am proposing to remove all intermediate origins, then the funds become untraceable and hence unverifiable. To fix that, a new transaction needs to replace old ones. A simplified version: If there was a transaction chain A->B->C->D, and I wish to show only A->D, only a transaction like that never actually occurred, it would be impossible to say that it did without having A's private key, in order to sign this transaction. In order to create this transaction, I need A's private key. And if I wish this to be publicly implemented I need this key to be public, so that any node creating this Exodus Block can sign with it. Hence the Genesis Account. And yes, it is not really an account. On 27 August 2017 at 00:31, Thomas Guyot-Sionnest wrote: > Pruning is already implemented in the nodes... Once enabled only unspent > inputs and most recent blocks are kept. IIRC there was also a proposal to > include UTXO in some blocks for SPV clients to use, but that would be > additional to the blockchain data. 
> > Implementing your solution is impossible because there is no way to > determine authenticity of the blockchain mid way. The proof that a block > hash leads to the genesis block is also a proof of all the work that's been > spent on it (the years of hashing). At the very least we'd have to keep all > blocks until a hard-coded checkpoint in the code, which also means that as > nodes upgrades and prune more blocks older nodes will have difficulty > syncing the blockchain. > > Finally it's not just the addresses and balance you need to save, but also > each unspent output block number, tx position and script that are required > for validation on input. That's a lot of data that you're suggesting to > save every 1000 blocks (and why 1000?), and as said earlier it doesn't even > guarantee you can drop older blocks. I'm not even going into the details of > making it work (hard fork, large block sync/verification issues, possible > attack vectors opened by this...) > > What is wrong with the current implementation of node pruning that you are > trying to solve? > > -- > Thomas > > On 26/08/17 03:21 PM, Adam Tamir Shem-Tov via bitcoin-dev wrote: > > Solving the Scalability issue for bitcoin
> > I have this idea to solve the scalability problem I wish to make public. > > If I am wrong I hope to be corrected, and if I am right we will all gain > by it.
> > Currently each block is being hashed, and in its contents are the hash of > the block preceding it, this goes back to the genesis block. > >
> > What if we decide, for example, we decide to combine and prune the > blockchain in its entirety every 999 blocks to one block (Genesis block not > included in count). > >
> > How would this work?: Once block 1000 has been created, the network would > be waiting for a special "pruned block", and until this block was created > and verified, block 1001 would not be accepted by any nodes. > > This pruned block would prune everything from block 2 to block 1000, > leaving only the genesis block. Blocks 2 through 1000, would be calculated, > to create a summed up transaction of all transactions which occurred in > these 999 blocks. > >
> > And its hash pointer would be the Genesis block. > > This block would now be verified by the full nodes, which if accepted > would then be willing to accept a new block (block 1001, not including the > pruned block in the count). > >
> > The new block 1001, would use as its hash pointer the pruned block as its > reference. And the count would begin again to the next 1000. The next > pruned block would be created, its hash pointer will be referenced to the > Genesis Block. And so on.. > >
> > In this way the ledger will always be a maximum of 1000 blocks. > > > _______________________________________________ bitcoin-dev mailing list bitcoin-dev at lists.linuxfoundation.org https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthew.beton at gmail.com Sun Aug 27 13:19:25 2017 From: matthew.beton at gmail.com (Matthew Beton) Date: Sun, 27 Aug 2017 13:19:25 +0000 Subject: [bitcoin-dev] Solving the Scalability Problem on Bitcoin Message-ID: I think a slight problem with this is that wallets (often ones made by third party wallet software) do not fully empty. I don't know how often this happens, but some wallets, even if you tell them to send all funds, leave a small fraction of bitcoin remaining. If this is the case, it could be detrimental to the 'pruning idea', as wallets with any coins left cannot be pruned. For example: A has 1 BTC A -> B -> C If these wallets are not removing all the BTC, and a fraction is left over, B will not be able to be pruned out of the chain. On the other hand, if the wallets are completely emptied, the new 'pruned block' will be able to show A sending 1 BTC to C. This could be a problem, and so we need a way to persuade people to get their wallets to send everything instead of leaving a small fraction left over. I don't know how problematic this could be, or how frequently this happens, but I'm just putting it out there. -------------- next part -------------- An HTML attachment was scrubbed... URL: From optimiz3 at hotmail.com Mon Aug 28 15:29:31 2017 From: optimiz3 at hotmail.com (Alex Nagy) Date: Mon, 28 Aug 2017 15:29:31 +0000 Subject: [bitcoin-dev] P2WPKH Scripts, P2PKH Addresses, and Uncompressed Public Keys Message-ID: Let's say Alice has a P2PKH address derived from an uncompressed public key, 1MsHWS1BnwMc3tLE8G35UXsS58fKipzB7a (from https://bitcoin.stackexchange.com/questions/3059/what-is-a-compressed-bitcoin-key). 
If Alice gives Bob 1MsHWS1BnwMc3tLE8G35UXsS58fKipzB7a, is there any way Bob can safely issue Native P2WPKH outputs to Alice? BIPs 141 and 143 make it very clear that P2WPKH scripts may only derive from compressed public-keys. Given this restriction, assuming all you have is a P2PKH address - is there any way for Bob to safely issue spendable Native P2WPKH outputs to Alice? The problem is Bob has no idea whether Alice's P2PKH address represents a compressed or uncompressed public-key, so Bob cannot safely issue a Native P2WPKH output. AFAICT all code is supposed to assume P2WPKH outputs are compressed public-key derived. The conclusion would be that the existing P2PKH address format is generally unsafe to use with SegWit since P2PKH addresses may be derived from uncompressed public-keys. Am I missing something here? Referencing BIP141 and BIP143, specifically these sections: https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki#New_script_semantics "Only compressed public keys are accepted in P2WPKH and P2WSH" https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki#Restrictions_on_public_key_type "As a default policy, only compressed public keys are accepted in P2WPKH and P2WSH. Each public key passed to a sigop inside version 0 witness program must be a compressed key: the first byte MUST be either 0x02 or 0x03, and the size MUST be 33 bytes. Transactions that break this rule will not be relayed or mined by default. Since this policy is preparation for a future softfork proposal, to avoid potential future funds loss, users MUST NOT use uncompressed keys in version 0 witness programs." -------------- next part -------------- An HTML attachment was scrubbed...
URL: From riccardo.casatta at gmail.com Mon Aug 28 15:50:23 2017 From: riccardo.casatta at gmail.com (Riccardo Casatta) Date: Mon, 28 Aug 2017 17:50:23 +0200 Subject: [bitcoin-dev] "Compressed" headers stream Message-ID: Hi everyone, the Bitcoin headers are probably the most condensed and important piece of data in the world, and their demand is expected to grow. When sending a stream of continuous block headers, a common case in IBD and in disconnected clients, I think there is a possible optimization of the transmitted data: the headers after the first could avoid transmitting the previous hash, because the receiver can compute it by double hashing the previous header (an operation he needs to do anyway to verify PoW). In a long stream, for example 2016 headers, the savings in bandwidth are about 32/80 ~= 40% without compressed headers 2016*80=161280 bytes with compressed headers 80+2015*48=96800 bytes What do you think? In OpenTimestamps calendars we are going to use this compression to give lite-clients reasonably secure proofs (a full node gives higher security but isn't feasible in all situations, for example for in-browser verification) To speed up sync of a new client, Electrum starts with the download of a ~36MB file containing the first 477637 headers. For this kind of client, a common HTTP API with fixed-position chunks could be useful to leverage HTTP caching. For example /headers/2016/0 returns the headers from the genesis to the 2015th header included, while /headers/2016/1 gives the headers from the 2016th to the 4031st. Other endpoints could have chunks of 20160 blocks or 201600, such that with about 10 HTTP requests a client could fast-sync the headers -- Riccardo Casatta - @RCasatta -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gsanders87 at gmail.com Mon Aug 28 16:13:11 2017 From: gsanders87 at gmail.com (Greg Sanders) Date: Mon, 28 Aug 2017 12:13:11 -0400 Subject: [bitcoin-dev] "Compressed" headers stream In-Reply-To: References: Message-ID: Is there any reason to believe that you need Bitcoin "full security" at all for timestamping? On Mon, Aug 28, 2017 at 11:50 AM, Riccardo Casatta via bitcoin-dev < bitcoin-dev at lists.linuxfoundation.org> wrote: > Hi everyone, > > the Bitcoin headers are probably the most condensed and important piece of > data in the world, their demand is expected to grow. > > When sending a stream of continuous block headers, a common case in IBD > and in disconnected clients, I think there is a possible optimization of > the transmitted data: > The headers after the first could avoid transmitting the previous hash > cause the receiver could compute it by double hashing the previous header > (an operation he needs to do anyway to verify PoW). > In a long stream, for example 2016 headers, the savings in bandwidth are > about 32/80 ~= 40% > without compressed headers 2016*80=161280 bytes > with compressed headers 80+2015*48=96800 bytes > > What do you think? > > > In OpenTimestamps calendars we are going to use this compression to give > lite-client a reasonable secure proofs (a full node give higher security > but isn't feasible in all situations, for example for in-browser > verification) > To speed up sync of a new client Electrum starts with the download of a > file ~36MB containing > the first 477637 headers. > For this kind of clients could be useful a common http API with fixed > position chunks to leverage http caching. For example /headers/2016/0 > returns the headers from the genesis to the 2015 header included while > /headers/2016/1 gives the headers from the 2016th to the 4031. 
> Other endpoints could have chunks of 20160 blocks or 201600 such that with > about 10 http requests a client could fast sync the headers > > > -- > Riccardo Casatta - @RCasatta > > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gsanders87 at gmail.com Mon Aug 28 16:26:48 2017 From: gsanders87 at gmail.com (Greg Sanders) Date: Mon, 28 Aug 2017 12:26:48 -0400 Subject: [bitcoin-dev] "Compressed" headers stream In-Reply-To: References: Message-ID: Well, if anything my question may bolster your use-case. If there's a heavier chain that is invalid, I kind of doubt it matters for timestamping reasons. /digression On Mon, Aug 28, 2017 at 12:25 PM, Riccardo Casatta < riccardo.casatta at gmail.com> wrote: > > 2017-08-28 18:13 GMT+02:00 Greg Sanders : > >> Is there any reason to believe that you need Bitcoin "full security" at >> all for timestamping? >> > > This is a little bit out of the main topic of the email which is the > savings in bandwidth in transmitting headers, any comment about that? > > > P.S. As a personal experience timestamping is nowadays used to prove date > and integrity of private databases containing a lot of value, so yes, in > that cases I will go with Bitcoin "full security" > > >> >> On Mon, Aug 28, 2017 at 11:50 AM, Riccardo Casatta via bitcoin-dev < >> bitcoin-dev at lists.linuxfoundation.org> wrote: >> >>> Hi everyone, >>> >>> the Bitcoin headers are probably the most condensed and important piece >>> of data in the world, their demand is expected to grow. 
>>> >>> When sending a stream of continuous block headers, a common case in IBD >>> and in disconnected clients, I think there is a possible optimization of >>> the transmitted data: >>> The headers after the first could avoid transmitting the previous hash >>> cause the receiver could compute it by double hashing the previous header >>> (an operation he needs to do anyway to verify PoW). >>> In a long stream, for example 2016 headers, the savings in bandwidth are >>> about 32/80 ~= 40% >>> without compressed headers 2016*80=161280 bytes >>> with compressed headers 80+2015*48=96800 bytes >>> >>> What do you think? >>> >>> >>> In OpenTimestamps calendars we are going to use this compression to give >>> lite-client a reasonable secure proofs (a full node give higher security >>> but isn't feasible in all situations, for example for in-browser >>> verification) >>> To speed up sync of a new client Electrum starts with the download of a >>> file ~36MB containing >>> the first 477637 headers. >>> For this kind of clients could be useful a common http API with fixed >>> position chunks to leverage http caching. For example /headers/2016/0 >>> returns the headers from the genesis to the 2015 header included while >>> /headers/2016/1 gives the headers from the 2016th to the 4031. >>> Other endpoints could have chunks of 20160 blocks or 201600 such that >>> with about 10 http requests a client could fast sync the headers >>> >>> >>> -- >>> Riccardo Casatta - @RCasatta >>> >>> _______________________________________________ >>> bitcoin-dev mailing list >>> bitcoin-dev at lists.linuxfoundation.org >>> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev >>> >>> >> > > > -- > Riccardo Casatta - @RCasatta > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From riccardo.casatta at gmail.com Mon Aug 28 16:25:01 2017 From: riccardo.casatta at gmail.com (Riccardo Casatta) Date: Mon, 28 Aug 2017 18:25:01 +0200 Subject: [bitcoin-dev] "Compressed" headers stream In-Reply-To: References: Message-ID: 2017-08-28 18:13 GMT+02:00 Greg Sanders : > Is there any reason to believe that you need Bitcoin "full security" at > all for timestamping? > This is a little bit out of the main topic of the email which is the savings in bandwidth in transmitting headers, any comment about that? P.S. As a personal experience timestamping is nowadays used to prove date and integrity of private databases containing a lot of value, so yes, in that cases I will go with Bitcoin "full security" > > On Mon, Aug 28, 2017 at 11:50 AM, Riccardo Casatta via bitcoin-dev < > bitcoin-dev at lists.linuxfoundation.org> wrote: > >> Hi everyone, >> >> the Bitcoin headers are probably the most condensed and important piece >> of data in the world, their demand is expected to grow. >> >> When sending a stream of continuous block headers, a common case in IBD >> and in disconnected clients, I think there is a possible optimization of >> the transmitted data: >> The headers after the first could avoid transmitting the previous hash >> cause the receiver could compute it by double hashing the previous header >> (an operation he needs to do anyway to verify PoW). >> In a long stream, for example 2016 headers, the savings in bandwidth are >> about 32/80 ~= 40% >> without compressed headers 2016*80=161280 bytes >> with compressed headers 80+2015*48=96800 bytes >> >> What do you think? >> >> >> In OpenTimestamps calendars we are going to use this compression to give >> lite-client a reasonable secure proofs (a full node give higher security >> but isn't feasible in all situations, for example for in-browser >> verification) >> To speed up sync of a new client Electrum starts with the download of a >> file ~36MB containing >> the first 477637 headers. 
>> For this kind of clients could be useful a common http API with fixed >> position chunks to leverage http caching. For example /headers/2016/0 >> returns the headers from the genesis to the 2015 header included while >> /headers/2016/1 gives the headers from the 2016th to the 4031. >> Other endpoints could have chunks of 20160 blocks or 201600 such that >> with about 10 http requests a client could fast sync the headers >> >> >> -- >> Riccardo Casatta - @RCasatta >> >> _______________________________________________ >> bitcoin-dev mailing list >> bitcoin-dev at lists.linuxfoundation.org >> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev >> >> > -- Riccardo Casatta - @RCasatta -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at xiph.org Mon Aug 28 17:06:04 2017 From: greg at xiph.org (Gregory Maxwell) Date: Mon, 28 Aug 2017 17:06:04 +0000 Subject: [bitcoin-dev] Fwd: P2WPKH Scripts, P2PKH Addresses, and Uncompressed Public Keys In-Reply-To: References: Message-ID: On Mon, Aug 28, 2017 at 3:29 PM, Alex Nagy via bitcoin-dev wrote: > If Alice gives Bob 1MsHWS1BnwMc3tLE8G35UXsS58fKipzB7a, is there any way Bob > can safely issue Native P2WPKH outputs to Alice? Absolutely not. You can only pay people to a script pubkey that they have specified. Trying to construct some alternative one that they didn't specify but in theory could spend would be like "paying someone" by putting a cheque in a locked safe labeled "danger radioactive" that you quietly bury in their back yard. Or taking the payment envelope they gave you stuffing it with cash after changing the destination name to pig latin and hiding it in the nook of a tree they once climbed as a child. 
There have been technical reasons why some wallets would sometimes display some outputs they didn't generate but could spend, but these cases are flaws-- they're not generic for all cases they could in theory spend, and mostly exist because durability across backup recovery makes it impossible for the wallet to tell what it did or didn't issue. So regardless of your query about uncompressed keys, you cannot do what you described: Wallets will not see the payment and may have no mechanism to recover it even if you tell the recipient what you've done. And yes, the use of an uncompressed key could later render it unspendable. From greg at xiph.org Mon Aug 28 17:12:15 2017 From: greg at xiph.org (Gregory Maxwell) Date: Mon, 28 Aug 2017 17:12:15 +0000 Subject: [bitcoin-dev] Fwd: "Compressed" headers stream In-Reply-To: References: Message-ID: On Mon, Aug 28, 2017 at 3:50 PM, Riccardo Casatta via bitcoin-dev wrote: > Hi everyone, > > the Bitcoin headers are probably the most condensed and important piece of > data in the world, their demand is expected to grow. > > When sending a stream of continuous block headers, a common case in IBD and > in disconnected clients, I think there is a possible optimization of the > transmitted data: > The headers after the first could avoid transmitting the previous hash cause > the receiver could compute it by double hashing the previous header (an > operation he needs to do anyway to verify PoW). > In a long stream, for example 2016 headers, the savings in bandwidth are > about 32/80 ~= 40% > without compressed headers 2016*80=161280 bytes > with compressed headers 80+2015*48=96800 bytes > > What do you think? You are leaving a lot of bytes on the table. The bits field can only change every 2016 blocks (4 bytes per header), the timestamp cannot be less than the median of the last 11 and is usually only a small amount over the last one (saves 2 bytes per header), the block version is usually one of the last few (saves 3 bytes per header).
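[Editor's note: the omit-the-prev-hash scheme and the byte accounting in the quoted message are easy to prototype. A minimal Python sketch follows; it uses toy headers with the real 4-byte version / 32-byte prev-hash / 44-byte remainder layout but made-up field contents and no real proof of work, and reproduces the byte counts from the thread.]

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Bitcoin's double SHA256 -- the hash a receiver computes anyway for PoW."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def make_chain(n: int) -> list:
    """Toy 80-byte headers: 4-byte version + 32-byte prev hash + 44-byte rest."""
    headers, prev = [], b"\x00" * 32
    for i in range(n):
        h = b"\x01\x00\x00\x00" + prev + i.to_bytes(44, "little")
        headers.append(h)
        prev = sha256d(h)
    return headers

def compress(headers: list) -> bytes:
    # First header in full; every later header drops its 32-byte prev hash,
    # leaving 48 bytes (version + everything after the prev-hash field).
    return headers[0] + b"".join(h[:4] + h[36:] for h in headers[1:])

def decompress(blob: bytes, n: int) -> list:
    headers, off = [blob[:80]], 80
    for _ in range(n - 1):
        chunk, off = blob[off:off + 48], off + 48
        # Reconstruct the omitted field by double hashing the previous header.
        headers.append(chunk[:4] + sha256d(headers[-1]) + chunk[4:])
    return headers

chain = make_chain(2016)
blob = compress(chain)
assert decompress(blob, 2016) == chain
print(len(b"".join(chain)), len(blob))  # 161280 96800
```

The round trip is lossless precisely because each prev-hash is redundant given the previous header, which is the observation the proposal rests on.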
But all these improvements are just a constant factor. I think you want the compact SPV proofs described in the appendix of the sidechains whitepaper, which create log-scaling proofs. From kalle at rosenbaum.se Mon Aug 28 17:54:59 2017 From: kalle at rosenbaum.se (Kalle Rosenbaum) Date: Mon, 28 Aug 2017 19:54:59 +0200 Subject: [bitcoin-dev] Fwd: "Compressed" headers stream In-Reply-To: References: Message-ID: 2017-08-28 19:12 GMT+02:00 Gregory Maxwell via bitcoin-dev < bitcoin-dev at lists.linuxfoundation.org>: > > The bits field can only change every 2016 blocks (4 bytes per header), > the timestamp can not be less than the median of the last 11 and is > usually only a small amount over the last one (saves 2 bytes per > header), the block version is usually one of the last few (save 3 > bytes per header). > ... and I guess the nonce can be arbitrarily truncated as well, just brute force the missing bits :-P. > But all these things improvements are just a constant factor. I think > you want the compact SPV proofs described in the appendix of the > sidechains whitepaper which creates log scaling proofs. > I think that my blog post on compact spv proofs can be helpful also. It tries to make the pretty compact formulations in the sidechains paper a bit more graspable by normal people. http://popeller.io/index.php/2016/09/15/compact-spv-proofs/ Kalle -------------- next part -------------- An HTML attachment was scrubbed... URL: From optimiz3 at hotmail.com Mon Aug 28 20:55:47 2017 From: optimiz3 at hotmail.com (Alex Nagy) Date: Mon, 28 Aug 2017 20:55:47 +0000 Subject: [bitcoin-dev] P2WPKH Scripts, P2PKH Addresses, and Uncompressed Public Keys In-Reply-To: References: Message-ID: Thanks Gregory - to be clear should Native P2WPKH scripts only appear in redeem scripts? From reading the various BIPs it had seemed like Native P2WPKH and Native P2WSH were also valid and identifiable if they were encoded in TxOuts.
The theoretical use case for this would be saving bytes in Txes with many outputs. -----Original Message----- From: Gregory Maxwell [mailto:gmaxwell at gmail.com] Sent: Monday, August 28, 2017 10:04 AM To: Alex Nagy ; Bitcoin Protocol Discussion Subject: Re: [bitcoin-dev] P2WPKH Scripts, P2PKH Addresses, and Uncompressed Public Keys On Mon, Aug 28, 2017 at 3:29 PM, Alex Nagy via bitcoin-dev wrote: > If Alice gives Bob 1MsHWS1BnwMc3tLE8G35UXsS58fKipzB7a, is there any > way Bob can safely issue Native P2WPKH outputs to Alice? Absolutely not. You can only pay people to a script pubkey that they have specified. Trying to construct some alternative one that they didn't specify but in theory could spend would be like "paying someone" by putting a cheque in a locked safe labeled "danger radioactive" that you quietly bury in their back yard. Or taking the payment envelope they gave you stuffing it with cash after changing the destination name to pig latin and hiding it in the nook of a tree they once climbed as a child. There have been technical reasons why some wallets would sometimes display some outputs they didn't generate but could spend, but these cases are flaws-- they're not generic for all cases they could in theory spend, and mostly exist because durability to backup recovery makes it impossible for it to tell what it did or didn't issue. So regardless of your query about uncompressed keys, you cannot do what you described: Wallets will not see the payment and may have no mechanism to recover it even if you tell the recipient what you've done. And yes, the use of an uncompressed key could later render it unspendable.
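[Editor's note: the BIP143 policy quoted earlier in this thread is mechanical to check on raw public-key bytes, while a P2PKH address commits only to HASH160(pubkey), so no such check is possible from the address alone. A small sketch; the key bytes below are synthetic placeholders, not real curve points.]

```python
def is_compressed_pubkey(pk: bytes) -> bool:
    """BIP143 policy for v0 witness programs: 33 bytes, first byte 0x02 or 0x03."""
    return len(pk) == 33 and pk[0] in (0x02, 0x03)

def is_uncompressed_pubkey(pk: bytes) -> bool:
    """Legacy SEC1 uncompressed encoding: 65 bytes, first byte 0x04."""
    return len(pk) == 65 and pk[0] == 0x04

# Synthetic placeholder bytes (not real keys):
assert is_compressed_pubkey(b"\x02" + b"\x11" * 32)
assert is_compressed_pubkey(b"\x03" + b"\x11" * 32)
assert not is_compressed_pubkey(b"\x04" + b"\x11" * 64)
assert is_uncompressed_pubkey(b"\x04" + b"\x11" * 64)
# A P2PKH address commits only to HASH160(pubkey) -- a 20-byte digest --
# so neither predicate can be evaluated from the address alone, which is
# exactly why Bob cannot tell which encoding Alice used.
```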
From mark at friedenbach.org Mon Aug 28 21:33:52 2017 From: mark at friedenbach.org (Mark Friedenbach) Date: Mon, 28 Aug 2017 14:33:52 -0700 Subject: [bitcoin-dev] P2WPKH Scripts, P2PKH Addresses, and Uncompressed Public Keys In-Reply-To: References: Message-ID: > On Aug 28, 2017, at 8:29 AM, Alex Nagy via bitcoin-dev wrote: > > If Alice gives Bob 1MsHWS1BnwMc3tLE8G35UXsS58fKipzB7a, is there any way Bob can safely issue Native P2WPKH outputs to Alice? > No, and the whole issue of compressed vs uncompressed is a red herring. If Alice gives Bob 1MsHWS1BnwMc3tLE8G35UXsS58fKipzB7a, she is saying to Bob "I will accept payment to the scriptPubKey [DUP HASH160 PUSHDATA(20)[e4e517ee07984a4000cd7b00cbcb545911c541c4] EQUALVERIFY CHECKSIG]". Payment to any other scriptPubKey may not be recognized by Alice. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jl2012 at xbt.hk Tue Aug 29 03:30:07 2017 From: jl2012 at xbt.hk (Johnson Lau) Date: Tue, 29 Aug 2017 11:30:07 +0800 Subject: [bitcoin-dev] P2WPKH Scripts, P2PKH Addresses, and Uncompressed Public Keys In-Reply-To: References: Message-ID: <740F886F-6471-4418-A018-1A1B185744C3@xbt.hk> Yes it is allowed in TxOuts. And yes it is designed to save space. But the problem is Bob can't assume Alice understands the new TxOuts format. If Bob really wants to save space this way, he should first ask for a new BIP173 address from Alice. Never try to convert a P2PKH address to a P2SH or BIP173 address without the consent of the recipient. > On 29 Aug 2017, at 4:55 AM, Alex Nagy via bitcoin-dev wrote: > > Thanks Gregory - to be clear should Native P2WPKH scripts only appear in redeem scripts? From reading the various BIPs it had seemed like Native P2WPKH and Native P2WSH were also valid and identifiable if they were encoded in TxOuts. The theoretical use case for this would be saving bytes in Txes with many outputs.
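[Editor's note: the space-saving point above can be made concrete. A legacy P2PKH scriptPubKey is 25 bytes, while a native P2WPKH output script (per BIP141, a version-0 witness program: OP_0 followed by the 20-byte key hash) is 22 bytes. A sketch, reusing the key hash from Mark Friedenbach's message:]

```python
def p2pkh_spk(h160: bytes) -> bytes:
    """OP_DUP OP_HASH160 <20-byte hash> OP_EQUALVERIFY OP_CHECKSIG"""
    assert len(h160) == 20
    return bytes([0x76, 0xa9, 0x14]) + h160 + bytes([0x88, 0xac])

def p2wpkh_spk(h160: bytes) -> bytes:
    """Native P2WPKH (BIP141): OP_0 <20-byte hash>"""
    assert len(h160) == 20
    return bytes([0x00, 0x14]) + h160

# Key hash from the scriptPubKey quoted in the message above.
h = bytes.fromhex("e4e517ee07984a4000cd7b00cbcb545911c541c4")
legacy, native = p2pkh_spk(h), p2wpkh_spk(h)
print(len(legacy), len(native))  # 25 22 -- 3 bytes saved per output
```

Both scripts commit to the same 20-byte hash; only the spending conditions differ, which is why a sender must never pick the format unilaterally.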
> From simone.bronzini at chainside.net Tue Aug 29 10:19:10 2017 From: simone.bronzini at chainside.net (Simone Bronzini) Date: Tue, 29 Aug 2017 12:19:10 +0200 Subject: [bitcoin-dev] BIP proposal for Lightning-oriented multiaccount multisig HD wallets Message-ID: <8088fa79-8e77-8663-afb4-800a405a6182@chainside.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hi all, last month we started looking for feedback (here and on other channels) about a proposal for a new structure to facilitate the management of different multisig accounts under the same master key, avoiding key reuse but still allowing cosigners to independently generate new addresses. While previously multiaccount multisig wallets were little used, now that LN is becoming a reality it is extremely important to have a better multiaccount management method to handle multiple payment channels. Please have a look at the draft of the BIP at the link below: https://github.com/chainside/BIP-proposal/blob/master/BIP.mediawiki Any feedback is highly appreciated, but in particular we would like to collect opinions about the following issues: 1. coin_type level: this level is intended to allow users to manage multiple cryptocurrencies or forks of Bitcoin using the same masterkey (similarly to BIP44). We have already received some legit objections that, since we are talking about a Bitcoin Improvement Proposal, it shouldn't care about alt-coins. While we can agree with such objections, we also believe that having a coin_type level improves interoperability with multi-currency wallets (which is good), without any major drawback. Moreover, even a Bitcoin maximalist may hold multiple coins for whatever reason (short term speculation, testing, etc). 2. SegWit addresses: since mixing SegWit and non-SegWit addresses on the same BIP44 structure could lead to UTXOs not being completely recognised by old wallets, BIP49 was proposed to separate the key space.
Since this is a new proposal, we can assume that wallets implementing it would be SegWit-compatible and so there should be no need to differentiate between SegWit and non-SegWit pubkeys. Anyway, if someone believes this problem still holds, we thought about two possible solutions: a. Create separate purposes for SegWit and non SegWit addresses (this would keep the same standard as BIP44 and BIP49) b. Create a new level on this proposed structure to divide SegWit and non SegWit addresses: we would suggest adding this new level between cosigner_index and change We believe solution b. would be better as it would give the option of having a multisig wallet with non SegWit-aware cosigners without having to use two different subtrees. This proposal is a work in progress so we would like to receive some feedback before moving on with proposing it as a BIP draft. Simone Bronzini -----BEGIN PGP SIGNATURE----- Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iQIzBAEBCAAdFiEErS/wgXh5+C1vqPN/TXSJoN+7oQoFAlmlP2QACgkQTXSJoN+7 oQptgA/7B46/Why5h5/cxWyvgjmuUJ12Rkvh+EtfOUhMX+a8i4PJkLHGB2RibRfR /Li1F+QWd2yeqdNO97er8HDGSlouxB7twB0ZMnS/LRPsHTA3Zf4OoD7H/yjj3lcD GiJGy4MiHEOfjqaIwd0onUPX9ch5+Mm7aL34vBDdK0/8gm2v+HGO+GAefaUnZTQh /CIaM0Th9dDS0xs5wcP3ncNqs1e59MHXOWlh7+zAxfvFio+HHnCbULIe4uct6stC QxTNh8naQD4cB7tV9wsEeyuuJQ1gG8/pgN3WgRu5gW9CGpmpsySJgCCftkTZZHeL eoqGJy5XFbI4CN2wEC2pbWW0xtDNyFq71wUPYNXINn8/7rnSjSl06OKISEk0u1yL vhFuR9RSxEge2cS1pDwIwHVNR6pCeZMRwo0tp1OEXnt5VGGpmKengtpcFkFlOVdd avUueIe8OoFGODco4+f25foB/z/rzyg3REXYX36bZiS6UkUOx4TCGpAzY86i4fDJ STeDy5KMLk1S9rvTNrygxR74DkFMiNkalF3g4VauUlCFmh8iOzEDdtOQ3mLu/pgq MXxfxq6ABxeCmQ7LsuBcFc+wN6AVLhrOhIPGyI8EAyaZNIGByqdgZGubvOl0J/gt Yr4z5fViI7hjJijvooKzFtX0MNnaLBCOlggLpQO58t8En+BiNDE= =XgcB -----END PGP SIGNATURE----- -------------- next part -------------- A non-text attachment was scrubbed...
Name: 0xB2E60C73.asc Type: application/pgp-keys Size: 15541 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 0xB2E60C73.asc.sig Type: application/pgp-signature Size: 566 bytes Desc: not available URL: From luke at dashjr.org Tue Aug 29 20:07:43 2017 From: luke at dashjr.org (Luke Dashjr) Date: Tue, 29 Aug 2017 20:07:43 +0000 Subject: [bitcoin-dev] BIP proposal for Lightning-oriented multiaccount multisig HD wallets In-Reply-To: <8088fa79-8e77-8663-afb4-800a405a6182@chainside.net> References: <8088fa79-8e77-8663-afb4-800a405a6182@chainside.net> Message-ID: <201708292007.44679.luke@dashjr.org> > Status: Proposed This should only be set after peer review and implementations are complete, and you intend that there will be no further changes. > As registered coin types we propose the ones already used for BIP44, which can be found at the following page. I suggest just referring to SLIP 44 directly. You're missing the Backward Compatibility and Copyright sections. On Tuesday 29 August 2017 10:19:10 AM Simone Bronzini via bitcoin-dev wrote: > Hi all, > last month we started looking for feedback (here and on other channels) > about a proposal for a new structure to facilitate the management of > different multisig accounts under the same master key, avoiding key > reuse but still allowing cosigners to independently generate new > addresses. While previously multiaccount multisig wallets were little > used, now that LN is becoming a reality it is extremely important to > have a better multiaccount management method to handle multiple payment > channels. > Please have a look at the draft of the BIP at the link below: > > https://github.com/chainside/BIP-proposal/blob/master/BIP.mediawiki > > Any feedback is highly appreciated, but in particular we would like to > collect opinions about the following issues: > > 1. 
coin_type level: > this level is intended to allow users to manage multiple > cryptocurrencies or forks of Bitcoin using the same masterkey (similarly > to BIP44). We have already received some legit objections that, since we > are talking about a Bitcoin Improvement Proposal, it shouldn't care > about alt-coins. While we can agree with such objections, we also > believe that having a coin_type level improves interoperability with > muti-currency wallets (which is good), without any major drawback. > Moreover, even a Bitcoin maximalist may hold multiple coins for whatever > reason (short term speculation, testing, etc). > > 2. SegWit addresses: > since mixing SegWit and non-SegWit addresses on the same BIP44 structure > could lead to UTXOs not being completely recognised by old wallets, > BIP49 was proposed to separate the key space. Since this is a new > proposal, we can assume that wallets implementing it would be > SegWit-compatible and so there should be no need to differetiate between > SegWit and non-SegWit pubkeys. Anyway, if someone believes this problem > still holds, we thought about two possible solutions: > a. Create separate purposes for SegWit and non SegWit addresses > (this would keep the same standard as BIP44 and BIP49) > b. Create a new level on this proposed structure to divide SegWit > and non SegWit addresses: we would suggest to add this new level between > cosigner_index and change > > We believe solution b. would be better as it would give the option of > having a multisig wallet with non SegWit-aware cosigners without having > to use two different subtrees. > > This proposal is a work in progess so we would like to receive some > feedback before moving on with proposing it as a BIP draft. 
> > Simone Bronzini From thomasv at electrum.org Wed Aug 30 10:07:24 2017 From: thomasv at electrum.org (Thomas Voegtlin) Date: Wed, 30 Aug 2017 12:07:24 +0200 Subject: [bitcoin-dev] BIP proposal for Lightning-oriented multiaccount multisig HD wallets In-Reply-To: <8088fa79-8e77-8663-afb4-800a405a6182@chainside.net> References: <8088fa79-8e77-8663-afb4-800a405a6182@chainside.net> Message-ID: On 29.08.2017 12:19, Simone Bronzini via bitcoin-dev wrote: > 2. SegWit addresses: > since mixing SegWit and non-SegWit addresses on the same BIP44 structure > could lead to UTXOs not being completely recognised by old wallets, > BIP49 was proposed to separate the key space. This will lead to old UTXOs not being recognized by NEW wallets, because at some point new wallets will not care about implementing old standards. The only way to address this is to get out of bip39 and bip43, and to include a version number in the mnemonic seed. From simone.bronzini at chainside.net Wed Aug 30 12:22:30 2017 From: simone.bronzini at chainside.net (Simone Bronzini) Date: Wed, 30 Aug 2017 14:22:30 +0200 Subject: [bitcoin-dev] BIP proposal for Lightning-oriented multiaccount multisig HD wallets In-Reply-To: <201708292007.44679.luke@dashjr.org> References: <8088fa79-8e77-8663-afb4-800a405a6182@chainside.net> <201708292007.44679.luke@dashjr.org> Message-ID: Thanks for your feedback, I fixed what you suggested. As for the purpose how should we move on? We would be inclined to use 46, but of course we are open to any other number. On 29/08/17 22:07, Luke Dashjr via bitcoin-dev wrote: >> Status: Proposed > This should only be set after peer review and implementations are complete, > and you intend that there will be no further changes. > >> As registered coin types we propose the ones already used for BIP44, which > can be found at the following page. > > I suggest just referring to SLIP 44 directly. > > You're missing the Backward Compatibility and Copyright sections. 
> > > > On Tuesday 29 August 2017 10:19:10 AM Simone Bronzini via bitcoin-dev wrote: >> Hi all, >> last month we started looking for feedback (here and on other channels) >> about a proposal for a new structure to facilitate the management of >> different multisig accounts under the same master key, avoiding key >> reuse but still allowing cosigners to independently generate new >> addresses. While previously multiaccount multisig wallets were little >> used, now that LN is becoming a reality it is extremely important to >> have a better multiaccount management method to handle multiple payment >> channels. >> Please have a look at the draft of the BIP at the link below: >> >> https://github.com/chainside/BIP-proposal/blob/master/BIP.mediawiki >> >> Any feedback is highly appreciated, but in particular we would like to >> collect opinions about the following issues: >> >> 1. coin_type level: >> this level is intended to allow users to manage multiple >> cryptocurrencies or forks of Bitcoin using the same masterkey (similarly >> to BIP44). We have already received some legit objections that, since we >> are talking about a Bitcoin Improvement Proposal, it shouldn't care >> about alt-coins. While we can agree with such objections, we also >> believe that having a coin_type level improves interoperability with >> muti-currency wallets (which is good), without any major drawback. >> Moreover, even a Bitcoin maximalist may hold multiple coins for whatever >> reason (short term speculation, testing, etc). >> >> 2. SegWit addresses: >> since mixing SegWit and non-SegWit addresses on the same BIP44 structure >> could lead to UTXOs not being completely recognised by old wallets, >> BIP49 was proposed to separate the key space. Since this is a new >> proposal, we can assume that wallets implementing it would be >> SegWit-compatible and so there should be no need to differetiate between >> SegWit and non-SegWit pubkeys. 
Anyway, if someone believes this problem >> still holds, we thought about two possible solutions: >> a. Create separate purposes for SegWit and non SegWit addresses >> (this would keep the same standard as BIP44 and BIP49) >> b. Create a new level on this proposed structure to divide SegWit >> and non SegWit addresses: we would suggest to add this new level between >> cosigner_index and change >> >> We believe solution b. would be better as it would give the option of >> having a multisig wallet with non SegWit-aware cosigners without having >> to use two different subtrees. >> >> This proposal is a work in progess so we would like to receive some >> feedback before moving on with proposing it as a BIP draft. >> >> Simone Bronzini > _______________________________________________ > bitcoin-dev mailing list > bitcoin-dev at lists.linuxfoundation.org > https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: 0xB2E60C73.asc Type: application/pgp-keys Size: 15541 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 898 bytes Desc: OpenPGP digital signature URL: From simone.bronzini at chainside.net Wed Aug 30 12:48:24 2017 From: simone.bronzini at chainside.net (Simone Bronzini) Date: Wed, 30 Aug 2017 14:48:24 +0200 Subject: [bitcoin-dev] BIP proposal for Lightning-oriented multiaccount multisig HD wallets In-Reply-To: References: <8088fa79-8e77-8663-afb4-800a405a6182@chainside.net> Message-ID: <2568a65f-3dad-315f-3959-cea6048ab9ba@chainside.net> > This will lead to old UTXOs not being recognized by NEW wallets, because > at some point new wallets will not care about implementing old standards. Your observations make perfect sense. That's exactly why we endorse option b. in my previous email. 
> The only way to address this is to get out of bip39 and bip43, and to
> include a version number in the mnemonic seed.

As for the idea of having versioning on mnemonic seeds, I believe it would be a very useful feature indeed. How about opening a new, separate topic about it?

On 30/08/17 12:07, Thomas Voegtlin via bitcoin-dev wrote:
>
> On 29.08.2017 12:19, Simone Bronzini via bitcoin-dev wrote:
>
>> 2. SegWit addresses:
>> since mixing SegWit and non-SegWit addresses on the same BIP44 structure
>> could lead to UTXOs not being completely recognised by old wallets,
>> BIP49 was proposed to separate the key space.
> This will lead to old UTXOs not being recognized by NEW wallets, because
> at some point new wallets will not care about implementing old standards.
>
> The only way to address this is to get out of bip39 and bip43, and to
> include a version number in the mnemonic seed.
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-dev at lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0xB2E60C73.asc
Type: application/pgp-keys
Size: 15541 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 898 bytes
Desc: OpenPGP digital signature
URL:

From shiva at blockonomics.co Wed Aug 30 07:24:13 2017
From: shiva at blockonomics.co (shiva sitamraju)
Date: Wed, 30 Aug 2017 12:54:13 +0530
Subject: [bitcoin-dev] BIP49 Derivation scheme changes
Message-ID:

Hi,

I wanted to discuss a few changes in BIP49

*- Breaking backwards compatibility*

The BIP talks about breaking this, but it really doesn't. I really feel it should completely break this. Here is why. What would happen if you recover a wallet using seed words?

1.
Since there is no difference in seed words between segwit/non-segwit, the wallet would discover both m/44' and m/49' accounts

2. Note that we cannot ask the user to choose an account he wants to operate on (segwit/non-segwit). This is like asking him for the HD derivation path, and is really bad UI

3. The wallet now has to constantly monitor both m/44' and m/49' accounts for transactions

Basically we are always stuck with keeping compatibility with older seed words or always asking the user whether the seed words came from a segwit or non-segwit wallet!

Here is my suggestion:

1. By default all new wallets will be created as segwit m/49' without asking the user anything. I think you would agree with me that in the future we want most wallets to be segwit by default (unless the user chooses non-segwit from advanced options)!

2. Segwit wallet seed words have a different format which is incompatible with previous wallet seed words. This encodes the information that the wallet is segwit in the seed words itself. We need to define a structure for this

*- XPUB Derivation*

This is something not addressed in the BIP yet.

1. Right now you can get an xpub balance/transaction history. With m/49' there is no way to know whether an xpub is from m/44' or m/49'

2. This breaks lots of things. Wallets like electrum/armory/mycelium support importing an xpub as a watch-only wallet. Also services like blockonomics/blockchain.info use xpubs for displaying balances/generating merchant addresses

Looking forward to hearing your thoughts
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From adam.ficsor73 at gmail.com Wed Aug 30 06:45:31 2017
From: adam.ficsor73 at gmail.com (Adam Ficsor)
Date: Wed, 30 Aug 2017 07:45:31 +0100
Subject: [bitcoin-dev] ZeroLink Fungibility Framework -- Request for Discussion
Message-ID:

I've long been working on Bitcoin privacy, mainly on TumbleBit, HiddenWallet and BreezeWallet.
ZeroLink is my latest effort to gather all the privacy research I'm familiar with and combine/organize it in a coherent and practical way. The main point of ZeroLink is that "nothing is out of its scope": it is intended to provide complete anonymity on-chain. Amongst its many topics, ZeroLink defines a mixing technique, coin selection, private transaction and balance retrieval, transaction input and output indexing and broadcasting, and even includes UX recommendations. Users' privacy should not be breached on either the blockchain level or the network level.

Proposal: https://github.com/nopara73/ZeroLink/

In a nutshell, ZeroLink defines a pre-mix wallet, which can be incorporated into any Bitcoin wallet without much implementation overhead. Post-mix wallets on the other hand have strong privacy requirements, so the mixed-out coins will not lose their uniformity. The requirements and recommendations for pre- and post-mix wallets together define the Wallet Privacy Framework.

Coins from pre-mix wallets to post-mix wallets are moved by mixing. Most on-chain mixing techniques, like CoinShuffle, CoinShuffle++ or TumbleBit's Classic Tumbler mode, can be used. However, ZeroLink defines its own mixing technique: Chaumian CoinJoin, which is based on Gregory Maxwell's 2013 CoinJoin recommendations and ideas. I found this technique to be the most performant, fastest and cheapest one.

Regarding adoption, SamouraiWallet and HiddenWallet are going to implement and comply with ZeroLink, and BreezeWallet also shows significant interest.

Regards,
nopara73
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From belcher at riseup.net Wed Aug 30 16:15:56 2017
From: belcher at riseup.net (Chris Belcher)
Date: Wed, 30 Aug 2017 17:15:56 +0100
Subject: [bitcoin-dev] Payment Channel Payouts: An Idea for Improving P2Pool Scalability
Message-ID:

Pooled mining in bitcoin contributes to miner centralization.
P2Pool is one solution but has bad scalability; additional hashers require the coinbase transaction to be larger, bigger miners joining increase the variance of payouts for everyone else, and smaller miners must pay extra to consolidate dust payouts. In this email I propose an improved scheme using payment channels which would allow far more individual hashers to mine on p2pool and result in a much lower payout variance.

== Intro ==

P2Pool is a decentralized pool that works by creating a P2P network of hashers. These hashers work on a chain of shares similar to Bitcoin's blockchain. Each hasher works on a block that includes payouts to the previous shares' owners and the node itself. The point of pooling is to reduce the variance of payout, even though on average the reward is the same (or less with fees). The demand for insurance, and the liquid markets for options, show that variance does have costs that people are willing to pay to avoid.

Here is an example of a p2pool coinbase transaction:
https://blockchain.info/tx/d1a1e125ed332483b6e8e2f128581efc397582fe4c950dc48fadbc0ea4008022

It is 5803 bytes in size, which at a fee rate of 350 sat/b is worth 0.02031050 btc of block space that p2pool cannot sell to any other transaction. As bitcoin inflation goes down and miners are funded more by fees, this puts p2pool at more and more of a disadvantage compared to trusted-third-party mining pools.

As each hasher is paid to their own bitcoin address, this limits the number of hashers taking part, as adding more individual people to the payout transaction increases its size. Also, small payouts cost a disproportionate amount in miner fees to actually spend, which hurts small miners who are essential to a decentralized mining ecosystem.

This could maybe be solved by keeping a separate balance state for each user that is independent from the payouts, and making payouts only when that balance state exceeds some reasonable threshold.
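As an aside, the block-space cost figure quoted above is easy to reproduce (the 350 sat/b rate is the email's example number, not a recommendation):

```python
# Reproduce the opportunity cost of the example p2pool coinbase transaction.
coinbase_size_bytes = 5803      # size of the example coinbase tx
fee_rate_sat_per_byte = 350     # example fee rate from the email

forgone_fees_sat = coinbase_size_bytes * fee_rate_sat_per_byte
forgone_fees_btc = forgone_fees_sat / 100_000_000  # 1 btc = 100,000,000 sat

print(forgone_fees_sat)             # 2031050
print(f"{forgone_fees_btc:.8f}")    # 0.02031050
```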
But this increases the variance, which goes against the aim of pooled mining.

== Payment Channels ==

What's needed is a way to use off-chain payments where any number of payments can be sent to each individual hasher without using the blockchain. Then the N of the pay-per-last-N-shares (PPLNS) scheme of p2pool can be increased to something like 6-12 months of shares, so that as long as a small miner can mine a share every few months they will always get a payout when p2pool finds a block.

The payment channels would be in a hub-and-spokes system and would work in a similar way to coinswap, lightning network, atomic cross-chain swaps or any other contract involving hashlocks and timelocks. There would still be a sharechain, but with hashers paying the entire block reward to a hub. This hub would have a one-way payment channel open to every hasher in p2pool, creating a situation where if the hub gets paid then the hashers cannot fail to get paid. Because cheating is impossible, the hub and hashers will agree to just release the money to each other without resorting to the blockchain.

The coinbase address scriptPubKey, to which block rewards are paid, would be this:

    2of2 multisig hub + successful hasher
    OR hub pubkey + H(X)
    OR successful hasher pubkey + OP_CSV 6 months

That is, a 2of2 multisig between the hub and the "successful" hasher which found the block, although with a hashlock and timelock. H(X) is a hash value, where the preimage X is generated randomly by the hub and kept secret, but X will be revealed if the hub spends via that execution path. The OP_CSV execution path is there to stop any holdups or ransom: in the worst case, if the hub stalls then the successful hasher can steal the entire coinbase as punishment after 6 months.

Each payment channel address has this scriptPubKey:

    2of2 multisig hub-C + hasher-C
    OR
    2of2 multisig + H(X)
    hub-U + hasher-U

The pubkeys hub-C/hasher-C refer to 'cooperative' pubkeys. Hub-U/hasher-U refer to 'uncooperative' pubkeys.
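The three spending paths of the coinbase scriptPubKey described above can be modeled abstractly. This is a logic sketch in Python, not real Script: signature checking is reduced to set membership, the CSV timelock is modeled as an elapsed-months counter, and SHA256 stands in for whatever hash the scheme would actually use.

```python
# Logic sketch of the coinbase scriptPubKey's three spending paths:
#   1. cooperative: hub + successful hasher both sign (2-of-2)
#   2. hub alone, but only by revealing the preimage X of H(X)
#   3. successful hasher alone, after the 6-month CSV delay (anti-ransom)
import hashlib

def sha256(b):
    return hashlib.sha256(b).digest()

def can_spend(signers, hashlock, preimage=None, months_elapsed=0):
    if {"hub", "hasher"} <= signers:
        return True                              # cooperative 2-of-2 path
    if "hub" in signers and preimage is not None:
        return sha256(preimage) == hashlock      # spending reveals X on-chain
    if "hasher" in signers and months_elapsed >= 6:
        return True                              # CSV punishment path
    return False

X = b"hub secret preimage"                       # known only to the hub
H_X = sha256(X)                                  # committed in the script

assert can_spend({"hub", "hasher"}, H_X)                     # cooperative
assert not can_spend({"hub"}, H_X)                           # hub needs X
assert can_spend({"hub"}, H_X, preimage=X)                   # X revealed
assert not can_spend({"hasher"}, H_X)                        # must wait
assert can_spend({"hasher"}, H_X, months_elapsed=6)          # after 6 months
print("all spend-path checks pass")
```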
Before a hasher starts mining, the hub will open a one-way payment channel to the hasher and pay some bitcoin to it (let's say 0.5btc for example). The hashers mine a sharechain; a solved share contains the hasher's cooperative and uncooperative pubkey. The hub keeps up with the sharechain and announces partially-signed transactions going to each hasher. The transactions are updated states of the payment channel: they pay money to each hasher in proportion to the work that the hasher contributed to the sharechain. The transaction contains a signature matching the hub-U pubkey; the hasher could sign it with their hasher-U key and broadcast it, except they still need the value of X.

If a hasher is successful and finds a share that is also a valid bitcoin block, they broadcast it to the network. Now, the hub can spend the block reward money on its own, but only by revealing X. Each hasher could then take that X, combine it with the partially-signed transaction and broadcast that to get their money. So if the hub gets paid then the hashers cannot fail to get paid. Since defecting is pointless, the hub signs the hub-C signature of the partially-signed transaction and sends it to each hasher; then the successful hasher signs the 2of2 multisig sending the block reward money to the hub. The successful hasher gets a small bonus via an updated payment channel state for finding the block, to discourage block withholding, same as today's p2pool.

These payment channels can be kept open indefinitely: as new blocks are found by p2pool, the hub creates new partially-signed transactions with more money going to each hasher. When a hasher wants to stop mining and get the money, they can add their own hasher-C signature and broadcast the transaction to the network. If there's ever a problem and the hub has to reveal X, then all the payment channels to hashers will have to be closed and reopened with a new X, because their security depends on X being unknown.
== Hubs ==

The hub is a central point of failure. It cannot steal the money, but if it gets DDOS'd or just becomes evil then the whole thing would stop working. This problem could be mitigated by having a federated system, where there are several hubs to choose from and hashers have payment channels open with each of them. It's worth noting that if someone has a strong botnet they could probably DDOS individual p2pool hashers in the same way they DDOS hubs or even centralized mining pools.

The hub would need to own many bitcoins in order to have payment channels open while waiting for blocks to be mined: maybe 50 times the block reward, which today would be about 650 bitcoins. The hub should receive a small percentage of each block reward to provide it with an incentive; we know from JoinMarket that this percentage will probably be around 0.1% or less for large amounts of bitcoin.

Prospective hub operators should write their bids on a forum somewhere and have their details added to some list on github. Hashers should have an interface for blacklisting, whitelisting, and lowering and raising the priority of certain hubs in case the hub operators behave badly.

As well as the smart contract, there are iterated prisoner's dilemma effects between the hub and the hashers. If the hub cooperates it can expect to make a predictable low-risk income from its held bitcoins for a long time to come; if it does something bad then the hashers can easily call off the deal. The hub operator would require a lot of profit in order to burn its reputation and future income stream, and by damaging the bitcoin ecosystem it would have indirectly damaged its own held bitcoins. A fair pricing plan will probably have the hub taking a small percentage to start with, with that percentage going up 12 months later to take into account the hub's improved reputation.
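For a rough sense of the hub's incentive, here is a back-of-envelope calculation. Every number is an illustrative assumption, not a figure from the proposal: ~13 btc per block (12.5 subsidy plus some fees), the email's 0.1% cut and ~650 btc of working capital, and a guessed rate of one block found per day.

```python
# Back-of-envelope hub economics under the assumptions stated above.
block_reward_btc = 13.0                    # ~12.5 subsidy + fees (assumed)
hub_capital_btc = 50 * block_reward_btc    # ~650 btc locked in channels
hub_fee = 0.001                            # 0.1% of each block reward
blocks_per_year = 365                      # assume one pool block per day

annual_income_btc = block_reward_btc * hub_fee * blocks_per_year
annual_return = annual_income_btc / hub_capital_btc

print(round(annual_income_btc, 3))   # ~4.745 btc/year
print(f"{annual_return:.2%}")        # ~0.73% yearly on the locked capital
```

The point of the sketch is only that the return on locked capital is modest, which is consistent with the email's argument that a cooperating hub values its long-term income stream over any one-off defection.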
== Transaction Selection ==

All the hashers and the hub need to know the exact value of the block reward in advance, which means they must know what the miner fees will be. This is probably the most serious problem with this proposal.

One possible way to solve this is to mine transactions into shares, and so use the sharechain to make all the hashers and hubs come to consensus about exactly which transactions they will mine, and so exactly what the total miner fee will be. A problem here is that this consensus mechanism is slow: immediately after a bitcoin block is found, all the p2pool hashers will have to wait 30-120 seconds before they know what transactions to mine, so this would make them uncompetitive as a mining operation.

Another way to deal with this is to have the hub just choose all the transactions, announcing the transactions, total miner fee and merkle root for the hashers to mine. This would work but allows the hub to control and censor bitcoin transactions, which mostly defeats the point of p2pool as an improvement to bitcoin miner centralization.

Another way is to have the hashers and hub estimate what the total miner fee value will be. The estimate could start from the median miner fee of the last few blocks, or from the next 1MB of the mempool. The hub would announce all the partially-signed transactions to every hasher, and then periodically (say every 60 seconds) announce updated versions depending on how the mempool changes.

Let's analyze what happens if the estimated and actual rewards are different. If the actual block reward is lower than the estimated reward, then the hub can update the payment channel state to slightly lower values to take that into account when it announces the cooperative hub-C signatures. The hashers can't use the higher channel state without knowing X. The successful hasher will still get their bonus for finding the block, which should help in encouraging them to actually sign the hub's payout transaction.
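The lower-than-estimated case just described reduces to rescaling each channel state pro rata by the actual reward. A minimal numeric sketch (share counts and amounts are invented for illustration):

```python
# Sketch: when the block pays less than estimated, the hub announces
# channel states scaled down pro rata before releasing its cooperative
# hub-C signatures. All figures below are illustrative.
estimated_reward_sat = 1_300_000_000   # 13 btc estimated at mining time
actual_reward_sat    = 1_280_000_000   # mempool shifted; block paid less

shares = {"hasher_a": 600, "hasher_b": 300, "hasher_c": 100}  # PPLNS window
total_shares = sum(shares.values())

def channel_state(reward_sat):
    """Per-hasher payout proportional to sharechain contribution."""
    return {h: reward_sat * n // total_shares for h, n in shares.items()}

estimated = channel_state(estimated_reward_sat)
adjusted  = channel_state(actual_reward_sat)

# Hashers cannot use the higher (estimated) state without knowing X,
# so the hub can safely sign only the adjusted state.
for h in shares:
    assert adjusted[h] <= estimated[h]
print(adjusted)
```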
If the actual block reward is higher than the estimated reward, the hub would hopefully still update the hashers' payment channel states because of the iterated prisoner's dilemma effects. But if the actual reward is much higher, then the hub may find it profitable to burn its reputation and take the money by revealing X. One situation where this might happen is if someone accidentally pays a very high miner fee and a hasher mines it without it being taken into account in the hub's regular payment channel state updates. Apart from that very specific situation, this scheme of estimating the total miner fee should work.

== Some Notes ==

*) Block rewards are locked for 100 blocks before they can be spent, so the cooperative signatures should be exchanged after 100 blocks just in case the block gets made stale/orphaned. While the hashers are waiting out the 100-block reward maturity period, they should mine with another hub as the payout.

*) Today's p2pool has a feature for donating to individual hashers; this could be replicated in the payment channel system by having each share also contain the hasher's bitcoin address for donations (or possibly their LN payment code)

*) Each hasher should probably be made to pay some bitcoins into the payment channel address too, to stop DOSers locking up all the hub's bitcoins. If the hasher doesn't find a share within some timeout then the hub should close the payment channel.

*) Now that we have segwit, all these payment channel schemes are much easier to code.

*) The hashers must keep their money locked up in the payment channel for months before enough collects. This could be a problem because some miners don't really want to hold bitcoin long term. I wonder if there's some way to link up these channels to LN so the coins can be sold straight away. They could also use futures contracts to sell the coins today at a discount and actually deliver the coins later when they close the channel.
== References ==

*) https://en.bitcoin.it/wiki/P2Pool how p2pool works
*) https://bitcointalk.org/index.php?topic=18313.msg13057899#msg13057899 the scalability problems of p2pool
*) https://bitcointalk.org/index.php?topic=18313.msg20943086#msg20943086 making the PPLNS window longer
*) book: The Evolution of Cooperation by Robert Axelrod, for explaining iterated prisoner's dilemma effects in detail

Thanks to the p2pool developer veqtrus for reviewing this

From erik at q32.com Wed Aug 30 17:14:22 2017
From: erik at q32.com (Erik Aronesty)
Date: Wed, 30 Aug 2017 13:14:22 -0400
Subject: [bitcoin-dev] BIP103 to 30MB
Message-ID:

If you use this formula, with a decaying percentage, it takes about 100 years to get to 30MB, but never goes past that. Since it never passes 32, we don't have to worry about going past that ever... unless another hard fork is done. A schedule like this could allow block size to scale with tech growth asymptotically. Might be nice to include with other things.

    P = 17%, Pn = P * 0.95
    X = 1,   Xn = X * (1 + P)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
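Erik's schedule is quick to check numerically. The sketch below reads Pn and Xn as year-on-year recurrences starting from P = 17% and X = 1 MB; the exact asymptote depends on how often the growth step is applied, so treat the printed values as an illustration of the asymptotic behaviour rather than a precise limit.

```python
# Simulate the decaying-growth schedule: each step the block size grows
# by the current rate P, and P itself decays by 5%. Because the growth
# factors (1 + P) multiply out to a finite product, the size approaches
# an asymptote instead of growing without bound.
p = 0.17    # initial growth rate per step (17%)
x = 1.0     # starting block size in MB

sizes = []
for step in range(500):
    x *= 1 + p
    p *= 0.95
    sizes.append(x)

print(round(sizes[99], 2))    # size after ~100 steps
print(round(sizes[-1], 2))    # effectively the asymptote
assert sizes[-1] < 32         # never passes 32 MB under this reading
```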