The Big Blocks Mega Thread

Since this is a pressing and prevalent issue, I thought condensing the essential arguments into one mega thread would be better than rehashing everything in new threads all the time. I chose a FAQ format so that each common question or claim can be answered directly. I don't want to re-post everything here, so where appropriate I'm just going to use links.
Disclaimer: This is biased towards big blocks (BIP 101 in particular) but still tries to mention the risks, worries and fears. I think this is fair because all other major bitcoin discussion places severely censor and discourage big block discussion.
 
What is the block size limit?
The block size limit was introduced by Satoshi on 2010-07-15 as an anti-DoS measure (though this was not stated in the commit message; more info here). It has not been changed since, because historically there was no need to and because raising the block size limit requires a hard fork. The block size directly limits the number of transactions in a block, so the capacity of Bitcoin is directly limited by the block size limit.
 
Why does a raise require a hard fork?
Because larger blocks are seen as invalid by old nodes, a block size increase would fork these nodes off the network; therefore it is a hard fork. However, it is possible to lower the block size limit with a soft fork, since smaller blocks would still be seen as valid by old nodes. A soft fork is considerably easier to roll out, so it makes sense to roll out a more ambitious hard-fork limit and downsize as needed with soft forks if problems arise.
 
What is the deal with soft and hard forks anyways?
See this article by Mike Hearn: https://medium.com/@octskyward/on-consensus-and-forks-c6a050c792e7#.74502eypb
 
Why do we need to increase the block size?
The Bitcoin network is reaching its imposed block size limit while the hardware and software could support more transactions. Many believe that in its current phase of growth, artificially limiting the block size is stifling adoption, investment and future growth.
Read this article and all linked articles for further reading: http://gavinandresen.ninja/time-to-roll-out-bigger-blocks
Another article by Mike Hearn: https://medium.com/@octskyward/crash-landing-f5cc19908e32#.uhky4y1ua (this article is a little outdated since both Bitcoin Core and XT now have mempool limits)
 
What is the Fidelity Effect?
It is the Chicken and Egg problem applied to future growth of Bitcoin. If companies do not see how Bitcoin can scale long term, they don't invest which in turn slows down adoption and development.
See here and here.
 
Does an increase in block size limit mean that blocks immediately get larger to the point of the new block size limit?
No, blocks are as large as there is demand for transactions on the network. But one can assume that if the limit is lifted, more users and businesses will want to use the blockchain. This means that blocks will get bigger, but they will not automatically jump to the size of the block size limit. Increased usage of the blockchain also means increased adoption, investment and also price appreciation.
 
Which are the block size increase proposals?
See here.
It should be noted that BIP 101 is the only proposal which has been implemented and is ready to go.
 
What is the long term vision of BIP 101?
BIP 101 tries to track hardware limitations on bandwidth as closely as possible, so that nodes can keep running on normal home-user grade internet connections and the decentralized aspect of Bitcoin stays alive. Because raising the block size limit is believed to be hard, a long-term schedule is beneficial to planning and investment in the Bitcoin network. Go to this article for further reading and to understand what is meant by "designing for success".
BIP 101 vs actual transaction growth visualized: http://imgur.com/QoTEOO2
Note that the actual growth in BIP 101 is piece-wise linear and does not grow in steps as suggested in the picture.
 
What is up with the moderation and censorship on bitcoin.org, bitcointalk.org and /bitcoin?
Proponents of a more conservative approach argue that a block size increase proposal without "developer/expert consensus" should not be implemented via a majority hard fork. Therefore, discussion about the full-node clients which implement BIP 101 is not allowed. Since the same individuals have major influence over all three bitcoin websites (most notably theymos), discussion of Bitcoin XT is censored and/or discouraged on these websites.
 
What is Bitcoin XT anyways?
More info here.
 
What does Bitcoin Core do about the block size? What is the future plan by Bitcoin Core?
Bitcoin Core scaling plan as envisioned by Gregory Maxwell: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011865.html
 
Who governs or controls Bitcoin Core anyways? Who governs Bitcoin XT? What is Bitcoin governance?
Bitcoin Core is governed by a consensus mechanism. How it actually works is not clear. It seems that any major developer can "veto" a change. However, there is one head maintainer who pushes releases and otherwise organizes the development effort. It should be noted that the majority of the main contributors to Bitcoin Core are Blockstream employees.
BitcoinXT follows a benevolent dictator model (as Bitcoin used to follow when Satoshi and later Gavin Andresen were the lead maintainers).
It is a widespread belief that Bitcoin can be separated into protocol development and full-node (implementation) development. This means there can be multiple implementations of Bitcoin that all follow the same protocol and overall consensus mechanism. More reading here. With multiple implementations, each individual implementation can be run under a benevolent-dictator model, while protocol development follows an overall consensus model (which is enforced by Bitcoin's fundamental design through full nodes and miners' hash power). It is still unclear how protocol changes should actually be governed in such a model. Bitcoin governance is a research topic and still evolving.
 
What are the arguments against a significant block size increase and against BIP 101 in particular?
The main arguments against a significant increase are related to decentralization and therefore robustness against commercial interests and government regulation and intervention. More here (warning: biased Wiki article).
Another main argument is that Bitcoin needs a fee market established by a low block size limit to support miners long term. There is significant evidence and game theory to doubt this claim, as can be seen here.
Finally, block propagation and verification times increase with block size. This in turn increases miners' orphan rates, which means reduced profit. Some believe this disadvantages small miners, because they are not as well connected to the big miners. Also, mining is currently heavily centralized in China. Since most of these miners are behind the Great Firewall of China, their bandwidth to the rest of the world is limited, and there is a fear that longer block propagation times favor Chinese miners as long as they hold a mining majority. However, there are solutions in development that can drastically reduce block propagation times, so this problem should be less of an issue long term.
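As a rough illustration of why propagation time matters to miners, here is a minimal back-of-the-envelope model (a simplification, not taken from the sources above): if blocks arrive as a Poisson process with a 10-minute mean, the chance that a competing block appears while yours is still propagating is roughly:

```python
import math

def orphan_risk(propagation_delay_s, block_interval_s=600):
    """Probability that a competing block appears during the propagation window,
    assuming Poisson block arrivals -- a crude upper bound on the orphan rate."""
    return 1 - math.exp(-propagation_delay_s / block_interval_s)

print(f"{orphan_risk(2):.2%}")    # ~0.33% at  2 s propagation
print(f"{orphan_risk(20):.2%}")   # ~3.28% at 20 s propagation
```

The propagation improvements mentioned further down (thin blocks, IBLT) attack exactly this delay term.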
 
What is up with the fee market and what is the Lightning Network (LN)?
Major Bitcoin Core developers believe that a fee market established by a low block size is needed for the future security of the Bitcoin network. While many believe this is fundamentally true, there is major dispute over whether a fee market needs to be forced by a low block size. One of the main LN developers thinks such a fee market through a low block size is needed (read here). The Lightning Network is a non-bandwidth scaling solution. It uses payment channels that can be opened and closed with Bitcoin transactions settled on the blockchain. By routing payments through many of these channels, it is in theory possible to support far more transactions while a user only needs a few payment channels and therefore rarely has to use (settle on) the actual blockchain. More info here.
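To make the payment-channel idea concrete, here is a toy sketch. It is not the real LN protocol (no multi-signature, timelocks or routing); it only illustrates the accounting idea that two on-chain transactions can bracket any number of off-chain updates:

```python
class ToyChannel:
    """Two parties fund a channel on-chain, then shuffle balances off-chain."""
    def __init__(self, alice_btc, bob_btc):
        self.balances = {"alice": alice_btc, "bob": bob_btc}
        self.onchain_txs = 1                      # the funding transaction

    def pay(self, sender, receiver, amount):
        assert self.balances[sender] >= amount
        self.balances[sender] -= amount           # off-chain update:
        self.balances[receiver] += amount         # nothing touches the blockchain

    def close(self):
        self.onchain_txs += 1                     # the settlement transaction
        return self.balances, self.onchain_txs

ch = ToyChannel(alice_btc=0.05, bob_btc=0.05)
for _ in range(1000):
    ch.pay("alice", "bob", 0.00001)               # a thousand payments...
print(ch.close())                                 # ...but only 2 on-chain transactions
```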
 
How does LN and other non-bandwidth scaling solutions relate to Bitcoin Core and its long term scaling vision?
Bitcoin Core is headed towards a future where block sizes are kept low so that a fee market is established long term to secure miner incentives. The main scaling solutions promoted by Core are LN and other approaches that only occasionally settle transactions on the main Bitcoin blockchain. Essentially, Bitcoin becomes a settlement layer for solutions built on top of Bitcoin's core technology. Many believe this might be inevitable long term, but forcing this off-chain development today seems counterproductive to Bitcoin's much-needed growth and adoption phase, before such solutions can thrive. It should also be noted that no major non-bandwidth scaling solution (such as LN) has been tested or even implemented. It is not even clear whether such off-chain solutions are needed as long-term scaling solutions, as it might be possible to scale Bitcoin itself to handle all needed transaction volume. Some believe that the focus on a forced fee market by major Bitcoin Core developers represents a conflict of interest, since their employer is interested in pushing off-chain scaling solutions such as LN (more reading here).
 
Are there solutions in development that show the block sizes as proposed via BIP 101 are viable and block propagation times in particular are low enough?
Yes, most notably: Weak Blocks, Thin Blocks and IBLT.
 
What is Segregated Witness (SW) and how does it relate to scaling and block size increases?
See here. SW is, among other things, a way to increase effective block capacity once without a hard fork (the base block size is not increased, but witness data is exchanged separately from the block).
 
Feedback and more of those question/answer type posts (or revised question/answer pairs) appreciated!
 
ToDo and thoughts for expansion:
@Mods: Maybe this could be stickied?
submitted by BIP-101 to btc [link] [comments]

"Initial sync" argument as it applies to BIP 101

I made this post in the comments section in /bitcoin, but I figured I should post it here too to see what /bitcoinxt thinks about this argument.
I was curious how this "initial sync" argument applied to BIP 101, so I plotted it out in a spreadsheet. To calculate the potential blockchain size, I assumed completely full blocks, which is unlikely to be the case, so the real blockchain will actually be smaller than what I plot here.
For bandwidth, I assume a 12 Mbps (1.5 MB/s) starting point, but ultimately the starting point doesn't really matter. The more important assumption is the growth rate of 50% per year, which is predicted by Nielsen's law.
Year | Blockchain size (GB) | Bandwidth (MB/s) | Initial sync time (s)
2015 | 48 | 1.5 | 32000
2016 | 468 | 2.2 | 208213
2017 | 889 | 3.4 | 263396
2018 | 1,730 | 5.1 | 341713
2019 | 2,571 | 7.6 | 338552
2020 | 4,253 | 11.4 | 373360
2021 | 5,935 | 17.1 | 347345
2022 | 9,299 | 25.6 | 362815
2023 | 12,662 | 38.4 | 329378
2024 | 19,390 | 57.7 | 336254
2025 | 26,118 | 86.5 | 301948
2026 | 39,573 | 129.7 | 305004
2027 | 53,028 | 194.6 | 272473
2028 | 79,939 | 291.9 | 273831
2029 | 106,850 | 437.9 | 244009
2030 | 160,671 | 656.8 | 244612
2031 | 214,493 | 985.3 | 217701
2032 | 322,136 | 1,477.9 | 217970
2033 | 429,779 | 2,216.8 | 193870
2034 | 645,064 | 3,325.3 | 193989
2035 | 860,350 | 4,987.9 | 172488
2036 | 1,075,636 | 7,481.8 | 143766
2037 | 1,290,922 | 11,222.7 | 115027
2038 | 1,506,207 | 16,834.1 | 89474
2039 | 1,721,493 | 25,251.2 | 68175
2040 | 1,936,779 | 37,876.8 | 51134
2041 | 2,152,065 | 56,815.1 | 37878
2042 | 2,367,350 | 85,222.7 | 27778
As you can see, sync times will rise due to BIP 101, but it peaks in 2020, and then starts declining. By 2042, sync time will actually be less than it is now for the average node.
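A minimal Python sketch of this kind of calculation (simplified assumptions: ~52,560 ten-minute blocks per year, always full; the cap doubling in two-year steps rather than BIP 101's piece-wise linear schedule; bandwidth growing 50% per year, so the numbers differ slightly from the table above):

```python
size_gb   = 48.0          # assumed blockchain size at the end of 2015
bw_mb_s   = 1.5           # 12 Mbps starting bandwidth
block_mb  = 8.0           # BIP 101 cap after activation in 2016
blocks_py = 6 * 24 * 365  # ~52,560 ten-minute blocks per year

for year in range(2016, 2043):
    size_gb += block_mb * blocks_py / 1000          # every block completely full
    bw_mb_s *= 1.5                                  # Nielsen's law: +50% per year
    sync_s = size_gb * 1000 / bw_mb_s
    print(f"{year}: {size_gb:>12,.0f} GB  {bw_mb_s:>9,.1f} MB/s  sync ~{sync_s:>7,.0f} s")
    if year % 2 == 1:                               # cap doubles every two years...
        block_mb = min(block_mb * 2, 8192)          # ...up to the ~8 GB ceiling
```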
So, ultimately, I don't think this argument really holds much water. Bitcoin will remain accessible to anyone with a regular Internet connection, even with the most aggressive block size growth proposal.
submitted by timepad to bitcoinxt [link] [comments]

BIP 101 explained

I see a lot of posts misunderstanding BIP 101, so I will try to explain it as simply as possible. If you find anything wrong, please help me correct it (my English is not perfect), and I hope this gets stickied.
What is BIP 101?
BIP 101, proposed by Gavin Andresen, is a way to raise the block size limit from the current 1MB. Right now, if you send another node a block bigger than 1MB, it will be rejected as invalid. If you want more detail, please check here.
Why is BIP 101 a hard fork?
Because it changes a rule in the set of rules (the protocol) used in the Bitcoin network, in a way that old nodes reject. The protocol is how one node talks to (sends data to) another node.
When is the block size limit raised in BIP 101?
A BIP 101-integrated Bitcoin client (a "101-Node") checks each block on the chain and counts blocks whose version number is 536870919 ("101-Blocks"). When the 750th 101-Block is found within 1000 consecutive blocks, the 101-Node uses the timestamp of that block to calculate the activation time. The activation time is 2 weeks after that block's time, but not before 2016-01-11 00:00:00 UTC (whichever is later). Starting from the activation time, 101-Nodes will accept blocks bigger than 1MB. Because every node has the same chain, the calculation is exactly the same on all nodes.
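A rough sketch of that counting logic, following the description above (the real client tracks version bits in the block index and handles re-orgs; the constants are the ones quoted in this paragraph):

```python
BIP101_VERSION = 0x20000007            # 536870919
TWO_WEEKS      = 14 * 24 * 3600
EARLIEST       = 1452470400            # 2016-01-11 00:00:00 UTC (unix time)

def activation_time(chain):
    """chain: list of (version, block_time) tuples, oldest block first."""
    window = []                                    # last 1000 blocks: 101-Block or not
    for version, block_time in chain:
        window.append(version == BIP101_VERSION)
        if len(window) > 1000:
            window.pop(0)
        if sum(window) >= 750:                     # 750 of the last 1000 blocks signal
            return max(block_time + TWO_WEEKS, EARLIEST)
    return None                                    # threshold not reached yet
```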
How is the limit raised over time?
After the activation time, a 101-Node will accept blocks of up to 8MB. The limit then continues to go up block after block (from 8MB) at a rate of 2x per 2 years (2x per 63,072,000 seconds). This growth stops at 2036-01-06 00:00:00 UTC, when the limit reaches 8,192,000,000 bytes (about 8GB).
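The resulting schedule can be written as a small function. This is a sketch based only on the figures above (8,000,000 bytes at 2016-01-11, doubling every 63,072,000 seconds with linear interpolation in between, capped at 2036-01-06); the real consensus code computes this from block timestamps in its own way:

```python
BASE   = 8_000_000          # bytes at activation
T0     = 1452470400         # 2016-01-11 00:00:00 UTC
T_END  = 2083190400         # 2036-01-06 00:00:00 UTC
PERIOD = 63_072_000         # two years in seconds

def max_block_size(t):
    t = min(max(t, T0), T_END)
    doublings = (t - T0) // PERIOD
    remainder = (t - T0) % PERIOD
    size = BASE * 2 ** doublings
    return size + size * remainder // PERIOD      # piece-wise linear in between

print(max_block_size(T0))       # 8,000,000
print(max_block_size(T_END))    # 8,192,000,000 (about 8GB)
```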
Why are blocks used for the measurement?
You can fake almost everything in a digital world, but solving a math problem that requires trying every number can't be faked or shortcut. In short, you can't fake a valid block; you have to work for it.
Miner’s incentive
Miners are rewarded in bitcoins. The more bitcoin is worth, the more profit for miners. So in theory, miners won't do things that could damage their income and investment.
submitted by xd1gital to bitcoinxt [link] [comments]

Bitcoin "knows" how much *hashpower* it has available, and the code automatically adjusts the DIFFICULTY accordingly, every 2 weeks. Could Bitcoin somehow also "know" how much *bandwidth* it has available, so the code could also automatically adjust the MAX_BLOCKSIZE accordingly, every period?

TL;DR:
Are there any feasible "max blocksize" approaches which:
... into computing the new "max blocksize"?
For example, would it be possible to automatically compute the new "max blocksize" using some formula based on the actual amount of bandwidth currently available across the network - similar to the way that the new "difficulty" is already set automatically, based on the amount of actual hashpower available across the network?
Reading the various "max blocksize" proposals, I noticed that none of them seem to attempt to compute the new "max blocksize" for the next "period" based on any (direct or indirect) observation of the actual transaction/block-relaying "throughput" and "capacity" of the network itself.
This seems to be in contrast to the way that the new "difficulty" is periodically recomputed - ie, based on the actual hashpower currently available across the network.
This raises the questions:
  • Would it be possible to determine (ie, periodically re-compute) the new "max blocksize" for the next period, based on some actual aspect of the network hardware or infrastructure itself, such as the actual transaction/block-relaying "throughput" and "capacity" of the actual miners and full nodes themselves?
  • Would such a "hardware- or infrastructure-based" approach be desirable?
  • In precisely what sense might Bitcoin be able to:
    • know how much bandwidth it currently has available across the network, and
    • (optimally) adjust the new MAX_BLOCKSIZE accordingly
  • ... involving minimal (or no) "direct" human intervention, and maximal reliance on observation of some actual aspect of the available installed network hardware / infrastructure itself?
There is already a summary of the various major "max blocksize" proposals here:
Summary of Major Blocksize Proposals
https://np.reddit.com/btc/comments/3zuhnu/summary_of_major_blocksize_proposals/
These "max blocksize proposals" could be grouped into two (or three) categories, depending on how much explicit human intervention (if any) is involved every period to establish the new MAX_BLOCKSIZE:
  • "Manual" approaches (based on periodically reconfiguring, voting, etc.);
  • "Automatic" approaches (based on a constant number, or a pre-determined formula);
  • "Zen" approaches (where there isn't even any notion of "max blocksize").
Proposals taking a "hands-on" (manual, reconfigurable) approach
These "hands-on" proposals require some kind of periodic, explicit, manual human intervention over the course of the weeks, months and years - eg, full-node operators and/or miners would each individually periodically set some parameter (or cast a vote) which the code running on the network would then somehow aggregate (generally using some formula which would itself be hard-coded and pre-determined), in order to establish the new value of MAX_BLOCKSIZE for the coming period:
  • BIP 100
  • BIP 105
  • BitPay Adaptive Blocksize
  • etc.
Possible advantages: Would be more adaptive to dynamically evolving conditions in the future.
Possible disadvantages: Requires more frequent intervention/voting; certain players might be able to "game" the voting.
Proposals taking a "hands-off" (automatic, pre-configured) approach
These proposals do not involve any explicit, manual human intervention over the course of the years - eg, under XT, the MAX_BLOCKSIZE would start at 8 MB and would (smoothly) double every 2 years for the next 20 years, until it reaches 8 GB.
  • Core / Blockstream (currently)
  • BIP 101 / XT
  • BIP 102
  • BIP 103
  • BIP 106
  • BIP 202
  • etc.
Possible advantages: No need for reconfiguring (tinkering), voting (politicking).
Possible disadvantages: The network could get "locked in" to a particular "max blocksize" for many years, which might not end up being compatible with the actual infrastructure / hardware which ends up being available in the future.
Notes:
(1) BU (Bitcoin Unlimited) would apparently provide a GUI menu allowing the user to choose among the above proposals.
(2) Current voting results for some of the above proposals can be seen here:
https://data.bitcoinity.org/bitcoin/block_size_votes/7d?c=block_size_votes&r=hour&t=bar
Proposals taking a third, "Zen" approach
It is important to bear in mind that there is probably also a third category of proposals (which we often tend to forget, perhaps because of the irresistible lure of Blocksizing Bikeshedding), where MAX_BLOCKSIZE would not even exist at all, as it would be determined through the "invisible hand" of the market itself.
Actually, it could be argued that such an approach has in some sense already been in effect all along, since the market of miners is already setting its own "max blocksize(s)" in its ongoing calculations to avoid orphaning. In this perspective, the "max blocksize" during this time was such a high "ceiling" above all the actual blocksizes being mined that it didn't really have any impact on them (although this state of affairs may soon be coming to an end).
Proponents of this third, "Zen" approach (based not explicitly on humans or code - but instead implicitly on markets and economics) include tsontar (who tends to post more on /BitcoinMarkets) and long-time Bitcoin luminaries such as gavinandresen and Satoshi Nakamoto himself:
Nobody has been able to convincingly answer the question, "What should the optimal block size limit be?" And the reason nobody has been able to answer that question is the same reason nobody has been able to answer the question, "What should the price today be?" – tsontar
https://np.reddit.com/btc/comments/3xdc9e/nobody_has_been_able_to_convincingly_answer_the/
Gavin Andresen at 23:00 minute mark: "In my heart of hearts, I think everything would be just fine if the block limit was completely eliminated. I think actually nothing bad would happen."
https://np.reddit.com/bitcoinxt/comments/3kz5vo/gavin_andresen_at_2300_minute_mark_in_my_heart_of/
Satoshi Nakamoto, October 04, 2010, 07:48:40 PM "It can be phased in, like: if (blocknumber > 115000) maxblocksize = largerlimit / It can start being in versions way ahead, so by the time it reaches that block number and goes into effect, the older versions that don't have it are already obsolete."
https://np.reddit.com/btc/comments/3wo9pb/satoshi_nakamoto_october_04_2010_074840_pm_it_can/
Remark
In some grandiose albeit perhaps vague or intuitive sense, the raging debate among all the above proposals may constitute one of the most momentous and far-reaching exercises in "user needs & requirements" specification in the history of software development - because the decision about which of these proposals to adopt (or to adopt none of them) has the potential to impact the transactional volume / monetary velocity of the planet's first major cryptocurrency...
...or, as also suggested above, this whole raging debate may constitute mere bikeshedding and human vanity,
...or, as others have more darkly suggested, perhaps this whole raging debate is actually a sign that Bitcoin has been infiltrated in order to destroy it in what is perhaps the only way possible: from within, using human frailty and "social engineering".
The 21 million coin total money supply, the difficulty, and the "max blocksize"
The above well-known parameters each enjoy a distinct "status" in Bitcoin.
Satoshi did wisely specify and code, in the first release of the software itself:
  • the total money supply at 21 million Bitcoins (asymptotically reached over the course of decades)
  • the difficulty as a function of the total actual hashpower currently available across the network, recomputed every 2016 blocks or 2 weeks
And he did also communicate or "signal" (outside the code as it were, in the informal yet iron-clad "social compact" clearly understood and embraced by all the users) that the total money supply shall in some sense be "hard" i.e. it shall never be changed.
We're pretty sure (eg, based on the message he encoded in the genesis block) that the reason he did this was to avoid the kinds of unpredictability in money supply which have so often plagued private-central-bank-issued, debt-based fiat currencies, inevitably causing all of them to eventually be destroyed by hyperinflation.
Meanwhile, Satoshi (perhaps not so wisely?) did not specify the "max blocksize" (in the sense of explicitly and ceremonially specifying a formula for recalculating it, and a reason for keeping it).
Well, he actually (perhaps not so wisely?) kinda did specify it - but only as a temporary anti-spam measure, which he explicitly stated could and should be removed again later.
Summary of questions
Anyways, as you can see, this whole meandering post is basically asking (if we're now stuck with the idea of having a "max blocksize" at all) how feasible it might be to have the "max blocksize" computed as objectively and optimally and realistically as possible - not from a formula, but rather (like the "difficulty") from something based on the current actual state of the hardware and infrastructure itself (eg, from the bandwidth actually currently available across the network).
Maybe there's a reason why none of the above proposals do that - perhaps simply because the "bandwidth actually currently available across the network" isn't something that could easily or meaningfully be "measured"?
By the way, how is the hashpower "measured" for the purposes of recalculating the new "difficulty" anyways? I assume that Bitcoin isn't somehow directly peeking at the hardware itself - but instead is using some kind of "proxy": say, indirectly measuring hashpower based on how fast blocks have been getting solved on the network over the last 2 weeks.
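That is indeed how it works: the retarget uses block timestamps as the proxy. A simplified sketch (the real code adjusts the compact target rather than a difficulty number, and due to a well-known off-by-one it measures 2015 intervals, but the idea is the same):

```python
TARGET_SPACING  = 10 * 60        # desired seconds per block
RETARGET_BLOCKS = 2016           # blocks per adjustment period

def next_difficulty(old_difficulty, period_start_time, period_end_time):
    expected = TARGET_SPACING * RETARGET_BLOCKS                # two weeks
    actual   = period_end_time - period_start_time             # how long it really took
    actual   = max(min(actual, expected * 4), expected // 4)   # clamp to 4x either way
    return old_difficulty * expected / actual                  # faster blocks -> higher difficulty
```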
Could a similar indirect approach also be taken regarding "max blocksize" - where perhaps it might not be possible to directly or easily measure the actual bandwidth currently available across the network - but maybe we could indirectly measure such a thing, by also using a "proxy": eg, observing:
  • how fast blocks / transactions have been getting relayed / propagated on the network over the past couple weeks;
  • or: how backlogged the mempool has been getting over the past couple weeks
  • or even: what the recommended fee for inclusion in the next 1-6 blocks has been over the past couple weeks
?
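Purely as a thought experiment (this is not any existing BIP, and every name and constant below is made up for illustration), such a proxy-based rule might look structurally like the difficulty adjustment:

```python
def next_max_blocksize(old_max, median_propagation_s, target_propagation_s=30):
    """Hypothetical: scale the cap by how quickly blocks actually propagated
    over the last retarget period, clamped like the difficulty adjustment."""
    ratio = target_propagation_s / max(median_propagation_s, 1)
    ratio = max(min(ratio, 2.0), 0.5)             # never more than 2x up or down
    return max(int(old_max * ratio), 1_000_000)   # and never below 1 MB
```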
Now, I realize that such "hardware- or infrastructure-based" approaches to recomputing the "max blocksize" might seem to somehow "penalize" miners / full-node operators: as they add more bandwidth, the "max blocksize" would be set higher in response.
But the accepted approach to recomputing the new "difficulty" already seems to do something similar: it essentially "penalizes" miners: as they add more hashpower, the new "difficulty" is set higher in response.
Finally, it seems that perhaps the main factor which Satoshi failed to foresee (and which is playing a major role in fueling this debate), is the Great Firewall of China. I have wondered elsewhere whether we might actually need to take this physical-political fact into account explicitly within Bitcoin itself - as perhaps the best way of being able to "code around it".
By the way, I'm getting the ideas for these kinds of statistics from some of the "Bitcoin network monitoring" sites, eg:
https://tradeblock.com/bitcoin/
https://tradeblock.com/bitcoin/historical/1d-f-txval_per_tot-01071-blksize_per_avg-01071
http://statoshi.info/dashboard/db/fee-and-priority-estimates
http://www.cointape.com/#delay
Nowadays I tend to watch these sites just as much as the "Bitcoin price" sites - since lately it's been starting to look like the Blockstream / Core code running on most of the network, with its hard-coded "max blocksize" limits, may be getting dangerously close to causing serious transaction backlogs.
submitted by ydtm to btc [link] [comments]

Variable Block Size Proposal

Hey Bitcoiners!
While I am an avid Bitcoin supporter, long-term user, and have done development work on tools and platforms surrounding Bitcoin, I have been very busy these past few weeks and haven't had a chance to fully (or closely) monitor the Block Size debate.
I'm familiar with the basics, and have read abstracts about the front-running proposals (BIP 100, 101, and 102). Though I've honestly not read those in depth either. With that said, I was driving yesterday and thought of a potential idea. I'll be clear, this is just an idea, and I haven't fully fleshed it out. But I thought I'd throw it out there and see what people thought.
My Goal:
Provide a variable block size that provides for sustainable, long-term growth, and balances the block propagation, while also being mindful of potential spam attacks.
The Proposal:
Every 2016 blocks (approximately every two weeks, at the same time the difficulty is adjusted), the new block size parameters are calculated.
The calculation determines the average (mean) size of the past 2016 blocks. This "average" size is then doubled (200%) and used as the maximum block size for the subsequent 2016 blocks. At any point, if the new maximum size is calculated to be below 1MB, 1MB is used instead (which prevents regression from our current state).
Introduce a block minimum, the minimum will be 25% of the current maximum, calculated at the same time (that is, every 2016 blocks, at the same time the maximum is calculated). All blocks must be at least this size in order to be valid, for blocks that do not have enough transactions to meet the 25%, padding will be used. This devalues the incentive to mine empty blocks in either an attempt to deflate the block size, or to obtain a propagation advantage. Miners will be incentivized to include transactions, as the block must meet the minimum. This should ensure that even miners wishing to always mine the minimum are still confirming Bitcoin transactions.
At the block in which this is introduced the maximum would stay at 1MB for the subsequent 2016 blocks. With the minimum being enforced of 256KB.
Example:
* Average Block Size for the last 2016 blocks: 724KB
* New Maximum: 1448KB
* New Minimum: 362KB
Example: (Regression Prevention)
* Average Block Size for the last 2016 blocks: 250KB
* New Maximum: 1MB
* New Minimum: 256KB
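A minimal sketch of the recalculation described above (padding and consensus plumbing omitted):

```python
FLOOR = 1_000_000       # 1 MB regression floor

def recalc_limits(last_2016_block_sizes):
    """New max/min block size for the next 2016-block period."""
    avg = sum(last_2016_block_sizes) / len(last_2016_block_sizes)
    new_max = max(int(avg * 2), FLOOR)   # double the average, never below 1 MB
    new_min = new_max // 4               # minimum is 25% of the maximum
    return new_max, new_min

print(recalc_limits([724_000] * 2016))  # -> (1448000, 362000), as in the example
```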
The Future:
I believe that the 1MB regression prevention might need to be changed in the future, to prevent a large mining population from continually deflating the block size (and keeping us at the 1MB limit).
For this, the hard limit could be changed in the future manually, through a process similar to the current one, though hopefully with far less urgency and hysteria.
Another option is to add an additional calculation, preventing the new maximum from being lower than 75% of the current maximum. This would substantially slow down a block-size deflation attack.
Example of Block-Size Deflation Attack Prevention:
* Average Block Size for the last 2016 blocks: 4MB -> New Maximum: 8MB, New Minimum: 2MB
* Average Block Size for the last 2016 blocks: 2MB -> New Maximum: 6MB (2 x 200% = 4MB, which is below 75% of 8MB, so use 8 x 0.75 = 6MB), New Minimum: 1.5MB
This would allow the maximum to at most double per recalculation, and to shrink by at most 25% (the new maximum never falls below 75% of the old one).
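Extending the sketch above with that 75% floor (again, just an illustration):

```python
FLOOR = 1_000_000       # 1 MB regression floor

def recalc_limits_with_floor(last_2016_block_sizes, old_max):
    avg = sum(last_2016_block_sizes) / len(last_2016_block_sizes)
    new_max = max(int(avg * 2), int(old_max * 0.75), FLOOR)   # shrink at most 25%
    return new_max, new_max // 4

print(recalc_limits_with_floor([2_000_000] * 2016, 8_000_000))  # -> (6000000, 1500000)
```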
Request For Comments:
I'd love to hear your thoughts. Why wouldn't this work? What portion is flawed? Will the miners support such a proposal? Would this even solve the block size issue?
I will note that I don't find the 100% and 25% to be hard and fast in my idea. Those were just the values that initially jumped out at me. I could easily see the minimum being anything below 50% (above 50%, the network can never adjust to smaller block sizes). I could also see the maximum being anything over 100%.
I think the great part about this variable approach is that the network can adjust to address spikes in volume and readjust once those spikes dissipate.
Note to Mods: I know you've been fairly restrictive in topics along this nature. If you're going to remove this post, at least let me know.
submitted by wrayjustin to Bitcoin [link] [comments]


Variable Block Size Proposal | Justin M. Wray | Aug 29 2015

Justin M. Wray on Aug 29 2015:
Hey Bitcoiners!
While I am an avid Bitcoin supporter, long-term user, and have done
development work on tools and platforms surrounding Bitcoin, I have
been very busy these past few weeks and haven't had a chance to fully
(or closely) monitor the Block Size debate.
I'm familiar with the basics, and have read abstracts about the
front-running proposals (BIP 100, 101, and 102). Though I've honestly
not read those in depth either. With that said, I was driving
the other day and thought of a potential idea. I'll be clear, this is
just an idea, and I haven't fully fleshed it out. But I thought I'd
throw it out there and see what people thought.
My Goal:
Provide a variable block size that provides for sustainable, long-term
growth, and balances the block propagation, while also being mindful
of potential spam attacks.
The Proposal:
Every 2016 blocks (approximately every two weeks, at the same time the
difficulty is adjusted), the new block size parameters are calculated.
The calculation determines the average (mean) size of the past 2016
blocks. This "average" size is then doubled (200%) and used as the
maximum block size for the subsequent 2016 blocks. At any point, if
the new maximum size is calculated to be below 1MB, 1MB is used
instead (which prevents regression from our current state).
Introduce a block minimum, the minimum will be 25% of the current
maximum, calculated at the same time (that is, every 2016 blocks, at
the same time the maximum is calculated). All blocks must be at least
this size in order to be valid, for blocks that do not have enough
transactions to meet the 25%, padding will be used. This devalues the
incentive to mine empty blocks in either an attempt to deflate the
block size, or to obtain a propagation advantage. Miners will be
incentivized to include transactions, as the block must meet the
minimum. This should ensure that even miners wishing to always mine
the minimum are still confirming Bitcoin transactions.
At the block in which this is introduced the maximum would stay at 1MB
for the subsequent 2016 blocks. With the minimum being enforced of 256KB
.
Example:
* Average Block Size for the last 2016 blocks: 724KB
* New Maximum: 1448KB
* New Minimum: 362KB
Example: (Regression Prevention)
* Average Block Size for the last 2016 blocks: 250KB
* New Maximum: 1MB
* New Minimum: 256KB
The Future:
I believe that the 1MB regression prevention might need to be changed
in the future, to prevent a large mining population from continually
deflating the block size (and keeping us at the 1MB limit).
For this, the hard limit could be changed in the future manually,
through a process similar to the current one, though hopefully with
far less urgency and hysteria.
Another option is to add an additional calculation, preventing the new
maximum from being lower than 75% of the current maximum. This would
substantially slow down a block-size deflation attack.
Example of Block-Size Deflation Attack Prevention:
This would provide a maximum growth of 200% per recalculation, but a
maximum shrinkage of 75%.
Request For Comments:
I'd love to hear your thoughts. Why wouldn't this work? What portion
is flawed? Will the miners support such a proposal? Would this even
solve the block size issue?
I will note that I don't find the 100% and 25% to be hard and fast in
my idea. Those were just the values that initially jumped out at me.
I could easily see the minimum being anything below 50% (above 50% and
the network can never adjust to smaller block sizes). I could also see
the maximum being anything over 100%. Lastly, if an inflation attack
is a valid concern, a hard upper limit could be set (or the historical
32MB limit could remain).
I think the great part about this variable approach is that the
network can adjust to address spikes in volume and readjust once those
spikes dissipate.
Thanks!
Justin M. Wray
original: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010708.html
submitted by bitcoin-devlist-bot to bitcoin_devlist [link] [comments]

Facilitating Discussion of 0.9.0 FINAL of Bitcoin Core (aka Bitcoin QT)

To facilitate a detailed discussion of some of the finer points of this update, I added numbering to each bullet in the release notes, and also posted it to RapGenius, where people can annotate it if they'd like.
I'm not a programmer, but I'm curious to hear what programmers and other people smarter than me have to say about all the new changes.
http://rapgenius.com/The-bitcoin-dev-team-bitcoin-090-final-lyrics
EDIT 1: Doh! Reddit destroyed all the formatting and now I'm on baby duty so can't fix it. EDIT 2: Nap time! Just fixed the formatting :)
---- 0.9.0 RELEASE NOTES ----
Part 1. RPC:
1.1 - New notion of 'conflicted' transactions, reported as confirmations: -1
1.2 - 'listreceivedbyaddress' now provides tx ids
1.3 - Add raw transaction hex to 'gettransaction' output
1.4 - Updated help and tests for 'getreceivedby(account|address)'
1.5 - In 'getblock', accept 2nd 'verbose' parameter, similar to getrawtransaction, but defaulting to 1 for backward compatibility
1.6 - Add 'verifychain', to verify chain database at runtime
1.7 - Add 'dumpwallet' and 'importwallet' RPCs
1.8 - 'keypoolrefill' gains optional size parameter
1.9 - Add 'getbestblockhash', to return tip of best chain
1.10 - Add 'chainwork' (the total work done by all blocks since the genesis block) to 'getblock' output
1.11 - Make RPC password resistant to timing attacks
1.12 - Clarify help messages and add examples
1.13 - Add 'getrawchangeaddress' call for raw transaction change destinations
1.14 - Reject insanely high fees by default in 'sendrawtransaction'
1.15 - Add RPC call 'decodescript' to decode a hex-encoded transaction script
1.16 - Make 'validateaddress' provide redeemScript
1.17 - Add 'getnetworkhashps' to get the calculated network hashrate
1.18 - New RPC 'ping' command to request ping, new 'pingtime' and 'pingwait' fields in 'getpeerinfo' output
1.19 - Adding new 'addrlocal' field to 'getpeerinfo' output
1.20 - Add verbose boolean to 'getrawmempool'
1.21 - Add rpc command 'getunconfirmedbalance' to obtain total unconfirmed balance
1.22 - Explicitly ensure that wallet is unlocked in importprivkey
1.23 - Add check for valid keys in importprivkey
Part 2. Command-line options:
2.1 - New option: -nospendzeroconfchange to never spend unconfirmed change outputs
2.2 - New option: -zapwallettxes to rebuild the wallet's transaction information
2.3 - Rename option '-tor' to '-onion' to better reflect what it does
2.4 - Add '-disablewallet' mode to let bitcoind run entirely without wallet (when built with wallet)
2.5 - Update default '-rpcsslciphers' to include TLSv1.2
2.6 - make '-logtimestamps' default on and rework help-message
2.7 - RPC client option: '-rpcwait', to wait for server start
2.8 - Remove '-logtodebugger'
2.9 - Allow -noserver with bitcoind
Part 3. Block-chain handling and storage:
3.1 - Update leveldb to 1.15
3.2 - Check for correct genesis (prevent cases where a datadir from the wrong network is accidentally loaded)
3.3 - Allow txindex to be removed and add a reindex dialog
3.4 - Log aborted block database rebuilds
3.5 - Store orphan blocks in serialized form, to save memory
3.6 - Limit the number of orphan blocks in memory to 750
3.7 - Fix non-standard disconnected transactions causing mempool orphans
3.8 - Add a new checkpoint at block 279,000
Part 4. Wallet:
4.1 - Bug fixes and new regression tests to correctly compute the balance of wallets containing double-spent (or mutated) transactions
4.2 - Store key creation time. Calculate whole-wallet birthday
4.3 - Optimize rescan to skip blocks prior to birthday
4.4 - Let user select wallet file with -wallet=foo.dat
4.5 - Consider generated coins mature at 101 instead of 120 blocks
4.6 - Improve wallet load time
4.7 - Don't count txins for priority to encourage sweeping
4.8 - Don't create empty transactions when reading a corrupted wallet
4.9 - Fix rescan to start from beginning after importprivkey
4.10 - Only create signatures with low S values
Part 5. Mining:
5.1 - Increase default -blockmaxsize/prioritysize to 750K/50K
5.2 - 'getblocktemplate' does not require a key to create a block template
5.3 - Mining code fee policy now matches relay fee policy
Part 6. Protocol and network:
6.1 - Drop the fee required to relay a transaction to 0.01mBTC per kilobyte
6.2 - Send tx relay flag with version
6.3 - New 'reject' P2P message (BIP 0061, see https://gist.github.com/gavinandresen/7079034 for draft)
6.4 - Dump addresses every 15 minutes instead of 10 seconds
6.5 - Relay OP_RETURN data TxOut as standard transaction type
6.6 - Remove CENT-output free transaction rule when relaying
6.7 - Lower maximum size for free transaction creation
6.8 - Send multiple inv messages if mempool.size > MAX_INV_SZ
6.9 - Split MIN_PROTO_VERSION into INIT_PROTO_VERSION and MIN_PEER_PROTO_VERSION
6.10 - Do not treat fFromMe transaction differently when broadcasting
6.11 - Process received messages one at a time without sleeping between messages
6.12 - Improve logging of failed connections
6.13 - Bump protocol version to 70002
6.14 - Add some additional logging to give extra network insight
6.15 - Added new DNS seed from bitcoinstats.com
Part 7. Validation:
7.1 - Log reason for non-standard transaction rejection
7.2 - Prune provably-unspendable outputs, and adapt consistency check for it
7.3 - Detect any sufficiently long fork and add a warning
7.4 - Call the -alertnotify script when we see a long or invalid fork
7.5 - Fix multi-block reorg transaction resurrection
7.6 - Reject non-canonically-encoded serialization sizes
7.7 - Reject dust amounts during validation
7.8 - Accept nLockTime transactions that finalize in the next block
Part 8. Build system:
8.1 - Switch to autotools-based build system
8.2 - Build without wallet by passing --disable-wallet to configure, this removes the BerkeleyDB dependency
8.3 - Upgrade gitian dependencies (libpng, libz, libupnpc, boost, openssl) to more recent versions
8.4 - Windows 64-bit build support
8.5 - Solaris compatibility fixes
8.6 - Check integrity of gitian input source tarballs
8.7 - Enable full GCC Stack-smashing protection for all OSes
Part 9. GUI:
9.1 - Switch to Qt 5.2.0 for Windows build
9.2 - Add payment request (BIP 0070) support
9.3 - Improve options dialog
9.4 - Show transaction fee in new send confirmation dialog
9.5 - Add total balance in overview page
9.6 - Allow user to choose data directory on first start, when data directory is missing, or when the -choosedatadir option is passed
9.7 - Save and restore window positions
9.8 - Add vout index to transaction id in transactions details dialog
9.9 - Add network traffic graph in debug window
9.10 - Add open URI dialog
9.11 - Add Coin Control Features
9.12 - Improve receive coins workflow: make the 'Receive' tab into a form to request payments, and move historical address list functionality to File menu
9.13 - Rebrand to Bitcoin Core
9.14 - Move initialization/shutdown to a thread. This prevents "Not responding" messages during startup. Also show a window during shutdown
9.15 - Don't regenerate autostart link on every client startup
9.16 - Show and store message of normal bitcoin:URI
9.17 - Fix richtext detection hang issue on very old Qt versions
9.18 - OS X: Make use of the 10.8+ user notification center to display Growl-like notifications
9.19 - OS X: Added NSHighResolutionCapable flag to Info.plist for better font rendering on Retina displays
9.20 - OS X: Fix bitcoin-qt startup crash when clicking dock icon
9.21 - Linux: Fix Gnome bitcoin: URI handler
Part 10. Miscellaneous:
10.1 - Add Linux script (contrib/qos/tc.sh) to limit outgoing bandwidth
10.2 - Add '-regtest' mode, similar to testnet but private with instant block generation with 'setgenerate' RPC
10.3 - Add 'linearize.py' script to contrib, for creating bootstrap.dat
10.4 - Add separate bitcoin-cli client
submitted by WhiteyFisk to Bitcoin [link] [comments]

The problem lies with the Chinese

The root of the problem is the Chinese government's power. Its censorship is affecting a worldwide network.
Why should we bow to Chinese miners who are not willing to go above 8 MB?
Let me quote Mike Hearn
BIP 101 originally started with a 20mb limit+growth. That was based on some calculations Gavin did. At that point the Chinese miners started saying they couldn't accept 20 because of the firewall, but eight would be OK. They put announcements of their support for eight megabyte blocks in their coinbases, etc. Why eight? Because it's a Chinese homonym for "prosper" or "wealth": https://en.wikipedia.org/wiki/Numbers_in_Chinese_culture#Eig...
The Chinese have centralized mining in every aspect.
They have centralized hash power, and they've centralized the decision to increase the network capacity due to the political limitations imposed on them.
The Chinese I'm sure are very resourceful and creative people.
If we tell them, "sorry guys, you need to figure your shit out" and a portion of them go away, that's great, because then a lot of average joes can compete and mining gets decentralized, as it should be.
Wouldn't we be better off if we went to the 20 MB limit and had Bitcoin ready to play with the big boys once and for all?
With a tx size of 500 bytes on average, a 20 MB block would give us 40,000 tx/block -> 4,000 tx/min -> roughly 66 tx/sec (compared with about 3 tx/sec at today's 1 MB).
With 86,400 seconds in a day, that's a whopping 5.7MM tx/day.
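Spelling out that arithmetic (same assumptions as above):

```python
block_bytes  = 20_000_000     # proposed 20 MB blocks
avg_tx_bytes = 500            # assumed average transaction size
block_secs   = 600            # one block every ten minutes

tx_per_block = block_bytes // avg_tx_bytes          # 40,000
tx_per_sec   = tx_per_block / block_secs            # ~66.7
tx_per_day   = tx_per_sec * 86_400                  # ~5.76 million
print(tx_per_block, round(tx_per_sec), round(tx_per_day / 1e6, 1))
```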
This is still several orders of magnitude smaller than VISA alone (ideally we'd like a p2p financial system that makes the others obsolete, cash included), but it should be enough to let us build lots of on-chain innovation, to seriously start accepting Bitcoin payments in retail, and to be more than just an experiment.
Such capacity would encourage even more transactional volume; with more volume, it becomes harder to manipulate the Bitcoin market, as more people are involved in both buying and selling it. When that happens, Bitcoin starts becoming a stable unit of currency, and then you start having people seriously thinking: "Why the hell do I need a bank to save my money?"
This is why I believe banks are behind all these small-block proposals; their fear is that Bitcoin gets out of its experimental phase.
Now add improvements like SegWit to a 20 MB block and you have the equivalent of waaaaay more transactions per block; then banks are in trouble.
We can do this. Remember, the internet won't always be this slow: we're rolling out 100-gigabit Ethernet upgrades in the cloud, 5G wireless networks with gigabit speeds for phones, and gigabit internet is getting deployed faster and faster for homes all over the world.
We can do this.
submitted by gubatron to btc [link] [comments]

Bitcoin-QT 0.9 available for download

The Core Developers of Bitcoin released the 0.9.0 FINAL of Bitcoin Core (aka Bitcoin QT).
DOWNLOAD:
This is a Final Version, but it's the same as 0.9.0rc3
Sources: https://github.com/bitcoin/bitcoin/releases http://sourceforge.net/projects/bitcoin/files/Bitcoin/bitcoin-0.9.0/ https://bitcoin.org/bin/0.9.0/README.txt
Bitcoin Core version 0.9.0 is now available from:
https://bitcoin.org/bin/0.9.0/
This is a release candidate for a new major version. A major version brings both new features and bug fixes.
Please report bugs using the issue tracker at github:
https://github.com/bitcoin/bitcoin/issues

How to Upgrade

If you are running an older version, shut it down. Wait until it has completely shut down (which might take a few minutes for older versions), uninstall all earlier versions of Bitcoin, then run the installer (on Windows) or just copy over /Applications/Bitcoin-Qt (on Mac) or bitcoind/bitcoin-qt (on Linux).
If you are upgrading from version 0.7.2 or earlier, the first time you run 0.9.0 your blockchain files will be re-indexed, which will take anywhere from 30 minutes to several hours, depending on the speed of your machine.
On Windows, do not forget to uninstall all earlier versions of the Bitcoin client first, especially if you are switching to the 64-bit version.

Windows 64-bit installer

New in 0.9.0 is the Windows 64-bit version of the client. There have been frequent reports of users running out of virtual memory on 32-bit systems during the initial sync. Because of this it is recommended to install the 64-bit version if your system supports it.
NOTE: Release candidate 2 Windows binaries are not code-signed; use PGP and the SHA256SUMS.asc file to make sure your binaries are correct. In the final 0.9.0 release, Windows setup.exe binaries will be code-signed.

OSX 10.5 / 32-bit no longer supported

0.9.0 drops support for older Macs. The minimum requirements are now: * A 64-bit-capable CPU (see http://support.apple.com/kb/ht3696); * Mac OS 10.6 or later (see https://support.apple.com/kb/ht1633).

Downgrading warnings

The 'chainstate' for this release is not always compatible with previous releases, so if you run 0.9 and then decide to switch back to a 0.8.x release you might get a blockchain validation error when starting the old release (due to 'pruned outputs' being omitted from the index of unspent transaction outputs).
Running the old release with the -reindex option will rebuild the chainstate data structures and correct the problem.
Also, the first time you run a 0.8.x release on a 0.9 wallet it will rescan the blockchain for missing spent coins, which will take a long time (tens of minutes on a typical machine).

Rebranding to Bitcoin Core

To reduce confusion between Bitcoin-the-network and Bitcoin-the-software we have renamed the reference client to Bitcoin Core.

Autotools build system

For 0.9.0 we switched to an autotools-based build system instead of individual (q)makefiles.
Using the standard "./autogen.sh; ./configure; make" to build Bitcoin-Qt and bitcoind makes it easier for experienced open source developers to contribute to the project.
Be sure to check doc/build-*.md for your platform before building from source.

Bitcoin-cli

Another change in the 0.9 release is moving away from the bitcoind executable functioning both as a server and as a RPC client. The RPC client functionality ("tell the running bitcoin daemon to do THIS") was split into a separate executable, 'bitcoin-cli'. The RPC client code will eventually be removed from bitcoind, but will be kept for backwards compatibility for a release or two.

walletpassphrase RPC

The behavior of the walletpassphrase RPC when the wallet is already unlocked has changed between 0.8 and 0.9.
The 0.8 behavior of walletpassphrase is to fail when the wallet is already unlocked:
> walletpassphrase 1000
walletunlocktime = now + 1000
> walletpassphrase 10
Error: Wallet is already unlocked (old unlock time stays)
The new behavior of walletpassphrase is to set a new unlock time overriding the old one:
> walletpassphrase 1000
walletunlocktime = now + 1000
> walletpassphrase 10
walletunlocktime = now + 10 (overriding the old unlock time)

Transaction malleability-related fixes

This release contains a few fixes for transaction ID (TXID) malleability issues:

Transaction Fees

This release drops the default fee required to relay transactions across the network and for miners to consider the transaction in their blocks to 0.01mBTC per kilobyte.
Note that getting a transaction relayed across the network does NOT guarantee that the transaction will be accepted by a miner; by default, miners fill their blocks with 50 kilobytes of high-priority transactions, and then with 700 kilobytes of the highest-fee-per-kilobyte transactions.
The minimum relay/mining fee-per-kilobyte may be changed with the minrelaytxfee option. Note that previous releases incorrectly used the mintxfee setting to determine which low-priority transactions should be considered for inclusion in blocks.
The wallet code still uses a default fee for low-priority transactions of 0.1mBTC per kilobyte. During periods of heavy transaction volume, even this fee may not be enough to get transactions confirmed quickly; the mintxfee option may be used to override the default.
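To put those defaults side by side, here is a tiny sketch of the implied fee arithmetic (the round-up-per-kilobyte rule is an assumption for illustration; the wallet's exact rounding differs):

```python
MIN_RELAY_FEE = 0.01   # mBTC per kB, the new 0.9.0 relay/mining default
WALLET_FEE    = 0.1    # mBTC per kB, wallet default for low-priority transactions

def fee_mbtc(tx_bytes, rate_per_kb):
    kb = -(-tx_bytes // 1000)          # assumption: round up to a whole kilobyte
    return kb * rate_per_kb

print(fee_mbtc(250, MIN_RELAY_FEE))    # ~0.01 mBTC to get a small tx relayed
print(fee_mbtc(250, WALLET_FEE))       # ~0.1 mBTC default wallet fee for the same tx
```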

0.9.0 Release notes

RPC:
Command-line options:
Block-chain handling and storage:
Wallet:
Mining:
Protocol and network:
Validation:
Build system:
GUI:
submitted by allex2501 to BrasilBitcoin [link] [comments]
