If you know how to set up NiceHash on Linux, you can get it going on any cloud hosting provider. I would like to emphasise that normal cloud hosting servers will not work for this, as they have no graphics cards. You're better off using servers normally used for AI (artificial intelligence) or ML (machine learning), as both require GPU power. Another thing is, with cloud hosting such as Google or AWS, they charge you for what you use. So if you aren't careful, you could be charged well over a thousand dollars every month, as Google and AWS will provide more computing/GPU power when your server is close to maxing out. This is done because, normally, people or companies that go with Google or AWS have the money to pay for it and usually never want their services to go down. If I ran a service on a pre-determined plan, say 30 GB of RAM, the server stops when it maxes out. When the server stops, my service stops, which loses me money. With most cloud hosting companies, the servers never stop and you can scale quite efficiently. Lastly, mining bitcoins as a hobby or as a job on cloud servers isn't profitable at all. You will end up spending more money on cloud hosting than you get back in bitcoins. My advice? Don't try mining for bitcoins using NiceHash or anything else on Google Cloud Platform using the free account, as that breaches their Terms of Service. Instead, create a few accounts with other hosting companies such as AWS, IBM, Oracle, or Alibaba Cloud and use their free plans to test out mining bitcoins on cloud hosting. If you like it, use the free AWS EC2 instance you get (free forever with limited use) and mine away. Or look for alternative cloud hosting companies that are a lot cheaper but give the same results. Or better yet, get an ASIC miner. It requires a bigger initial investment, but it will pay for itself in the long run.
"How a bug in Visual Studio 2015 exposed my source code on GitHub and cost me $6,500 in a few hours" ~ Bots continuously scan github source code looking for exposed amazon access keys which they use to spawn large numbers of EC2 instances to mine on someone else's dime...
From Platform-based Token to the Public Chain, Will CoinEx Embrace a Paradigm Shift?
Platform-based tokens have shone in 2019, but that prosperity does not make up for their narrow utility. How can they find new application scenarios beyond repurchase-and-destruction and transaction fee deduction? The answer given by Binance is to expand into a public-chain ecosystem and develop the platform token into a public-chain token in a broader sense, like ETH. Not long ago, CoinEx announced its plan to launch a public chain. CET will then not just be a token listed on the platform, but also the basic token in a public-chain ecosystem. Unlike the Binance Chain, whose partners serve as its nodes, CoinEx Chain chooses nodes according to the votes of ordinary users. This is clearly another paradigm shift for platform-based tokens seeking to expand their application scenarios. CoinEx Chain is a public chain created for DEX by CoinEx's professional blockchain R&D team. Different from other DEXs, CoinEx uses three parallel public chains: a DEX public chain, a Smart public chain, and a Privacy public chain. They focus on transactions, smart contracts, and privacy respectively, and interoperate through "IBC protocols". How can you get involved in CoinEx Chain's ecosystem? A detailed interpretation of the node recruitment for CoinEx's DEX public chain is provided below. How do you participate in the CET node election? CoinEx's node election rules are simple: any holder who stakes at least 5 million CET on the chain is qualified, and the first 42 spots in the rankings automatically become valid validators entitled to generate blocks and share the proceeds. It should be noted that the node election is a continuous process, with rankings recalculated at every block. Responsibilities of validators include preventing double signing and DDoS attacks, staying online at all times, upgrading node software and configuration, building a private key storage architecture, and participating in community governance.
Besides, there are server hardware requirements for running a node, as shown below: https://preview.redd.it/qhqk6uliftt31.png?width=1366&format=png&auto=webp&s=02addf13f8d9e619b70ba75e3a6eef2f1313e6f9 After the mainnet is online (expected in early November), CET withdrawn from CoinEx can be staked on the chain. Staking can be canceled at any time once completed, but it takes 21 days for the CET to return to the account. Private investors holding less than 5 million CET will be entitled to voting power in the election of validators and will receive bonuses as rewards. What are the returns on being a CET validator? Studying CoinEx's node return model, you will find that validator returns come mainly from two parts: block rewards and transaction fees. The transaction fee includes the gas fee in the usual sense and a function fee. Gas fees are charged for any transaction initiated on the chain, and function fees are charged for special operations on the DEX chain. For example, much like a DEX broker, a node will charge users for operations such as order matching, token issuing, trading pair creation, automated market making with Bancor, and address alias setting. In terms of block rewards, the CoinEx Foundation will provide a total of 315 million CET over five consecutive years. Specifically, it will send out about 105 million CET in the first year, at 10 CET per block. Similar to the Bitcoin design, block rewards will gradually decrease over time, though on a different schedule: every year, 2 CET will be deducted from the reward for each block. https://preview.redd.it/tmocf00lftt31.png?width=1566&format=png&auto=webp&s=e68bed2c3513e4665a2101229a0d781ff31f53f5 The basic data of CoinEx is shown in the figure below.
According to these conditions, the estimated annual transaction fee income for CoinEx's validators comes to around 38 million CET, and, assuming a staking rate of 50% across the whole network, the annualized rate of return for CoinEx's validators is 10%. That is to say, if a CoinEx validator is successfully re-elected, the basic token-denominated return rate will be around 10% for the first year. This figure will be higher at the start, when the total stake is relatively small. How do you calculate the actual income for the year? Here is a calculation formula into which numbers can be quickly inserted. Suppose the total stake on a node is a, of which p% is CET staked by the node itself and q% is CET entrusted by retail traders; the total stake of the whole network is b; the actual returns distributed across the whole network are c; and the commission ratio of the node is k. Then the actual income of the validator for the year is ac(p% + k·q%)/b. For example: suppose the total stake on a node is 10 million CET, including 8 million CET staked by the node itself and 2 million CET staked by ordinary CET holders, and the commission ratio of the node is 10%. With the total stake of the whole network at 1 billion CET and the actual returns distributed at 150 million CET, the actual income of the validator for the year is 1.23 million CET. Under these conditions, the annualized rate of return on the node's own CET is around 15.3%. So the actual income of a CoinEx validator can be divided into two parts by asset ownership: income from CET staked by the node itself, and commissions on CET staked by ordinary holders.
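The worked example above can be reproduced with a quick sketch (the function name is mine; the figures are the post's):

```python
def validator_annual_income(node_stake, own_pct, retail_pct, commission,
                            network_stake, network_rewards):
    """Implements the post's formula a*c*(p% + k*q%)/b."""
    return (network_rewards * node_stake
            * (own_pct + commission * retail_pct) / network_stake)

# Worked example: 10M CET node stake (80% own, 20% retail), 10% commission,
# 1B CET staked network-wide, 150M CET of returns distributed.
income = validator_annual_income(10e6, 0.80, 0.20, 0.10, 1e9, 150e6)
own_capital_return = income / (10e6 * 0.80)  # annualized return on the node's own stake
```

This gives 1.23 million CET of income, and dividing by the node's own 8 million CET stake recovers the roughly 15.3% annualized figure.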
https://preview.redd.it/4ghx0sloftt31.png?width=634&format=png&auto=webp&s=7b8df5a18cc8033c77473017cee7182f1c080c8b In other words, if a validator keeps the CET public chain safe, contributes to the development of CoinEx's ecosystem, and helps it gain more attention and favor from ordinary users, it can receive an annualized income higher than the basic staking income. Retail users may stake their CET on more professional and responsible nodes, sharing in the dividends of the node and of the CET public chain. In node elections, the Matthew effect has always been a target of criticism. So will ordinary token holders drive the centralization of validators under CoinEx's rules? The answer is no. Admittedly, as with all other PoS models, a degree of centralization is inevitable; there is always a trade-off between decentralization and efficiency. But at least mathematically, the annual income from CET staked by retail traders on different validators depends on k, the commission ratio of the node, holding a and the amount of CET staked by a given retail trader constant. That is to say, in terms of economic efficiency alone, the return on a retail trader's votes for different nodes does not depend on a node's scale, but on its commission ratio and on more implicit factors such as its security and reliability (or reputation). Many other public chains adopt a "supernode" election mechanism, so what are CoinEx's advantages and disadvantages? Among such chains, EOS and IOST are the best known. What are the similarities and differences in node elections between CoinEx and these counterparts? From the perspective of the election threshold, IOST needs 2.1 million votes (one vote per token).
At the price of USD 0.0044 when this document was published, that costs at least around USD 9,300, a really low threshold. Blocks.io shows that EOS now requires about 290 million votes (30 votes per token) for the top 21 supernodes. According to EOS REX data, if a consortium without a user base wants to obtain a block-producing right by renting tokens, it will cost around USD 2.55 million a year, approximately RMB 18 million. By contrast, the threshold for a CoinEx Chain node is only 5 million CET, approximately USD 100,000 at an estimated price of USD 0.02, a moderate cost. In terms of hardware, the configuration mentioned above costs on the order of USD 1,000 per year per server; the estimated AWS operating cost for a t3.xlarge is USD 1,458 per year, so one master with one backup costs only USD 2,916 a year. (The specific figures will vary slightly in practice.) Compare the server EOS officially recommended for running a node when it announced its node election: an Amazon AWS EC2 x1.32xlarge host, with a 128-core processor, 2 TB of memory, 2 × 1920 GB of SSD storage, and 25 Gb of network bandwidth. The operating cost of such a server, with one master and one backup, is 13.338 × 24 × 2 = USD 640 a day. (The bandwidth cost allocated per day is negligible.) It is thus obvious that CoinEx costs far less, avoiding the kind of server waste seen with EOS and eliminating that intangible cost. In terms of the number of nodes, CoinEx Chain has 42 validators, EOS has 21 block-producing nodes per round, and IOST has 63; CoinEx Chain sits in the middle of the decentralization-versus-efficiency trade-off. Overall, CoinEx Chain's node election is designed in a reasonable way, and it is destined to be a milestone for CoinEx, which once pioneered "trade-driven mining" and has even gone through "repurchase and destruction".
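The cost arithmetic above can be checked with a quick sketch (all prices are the post's quotes, not live market data; the variable names are mine):

```python
# Rough node-cost arithmetic from the figures quoted above.
iost_threshold_usd = 2_100_000 * 0.0044     # IOST: 2.1M votes at $0.0044 each
coinex_threshold_usd = 5_000_000 * 0.02     # CoinEx: 5M CET at an estimated $0.02
coinex_servers_usd_year = 1_458 * 2         # AWS t3.xlarge, one master + one backup
eos_servers_usd_day = 13.338 * 24 * 2       # x1.32xlarge hourly rate, master + backup
```

The IOST figure comes out at about USD 9,240, which the post rounds up to "at least USD 9,300"; the EOS daily figure matches the quoted USD 640 per day.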
Now it targets the DEX public chain, which is deemed as a paradigm shift that lifts CET out of the pattern of being platform-based tokens. Let’s look forward to its future development. Follow CoinEx Chain on Social Channels: Twitter: https://twitter.com/CoinExChain Facebook: https://www.facebook.com/CoinExChainOfficial/ Telegram: https://t.me/CoinExChainOrg
Hi all. I'm a long-time automated trader in traditional futures markets and a few years ago I became interested in Bitcoin and mined some coin back when GPU mining was feasible. Recently I created a BitMEX account and started recording data. I discovered that several of my existing systems could be adapted to work on XBTUSD, and that one was very profitable on paper. I've since written the code necessary to run it live. It's largely a liquidity provision strategy and it needs to join (and stay at) the best bid or offer with a single order when it has a signal. This is of course in order to receive a rebate if hit, as opposed to paying commish on every trade. The strategy is not complicated in terms of execution, it does not maintain multiple open orders, or layer the book in any fashion at all. It only cancels when my signal reverses. I quickly discovered however that on BitMEX, even when I wasn't receiving the dreaded "overloaded" message, simply getting an order up took 2-4 seconds for the API request to complete. Yes, I am using an ec2 instance that is half a millisecond, ping-wise, from the closest IP address on the multi-homed www.bitmex.com, yes I am using a keep-alive connection that I make sure to keep alive. Ignoring overloads for the moment, the issue is that even if I can get an order up, say at the bid, by the time the order is live, the market has often moved a few ticks away from me, which means I now need to move my order, but by the time it moves I have the same problem again. Then on top of this you have the overloads (probably caused in part by myself and others chasing the market precisely when it starts to move). I decided to measure over time exactly how long API requests were taking, and how often I was getting an "overloaded" reply, such that I might build a statistical expectation of what to look for to either indicate that it's a good time to trade, or that I should just shut my system down. 
I started sending orders every 10 seconds, measuring how long each API request took to complete and recording when I got an "overloaded" message. The following chart illustrates results for the past hour, today, shortly before I composed this message: https://i.imgur.com/a3PYZP4.png If an observation is below 0 then it is an "overload"; otherwise it is the time the order request took to complete. The chart represents a total of 328 observations, 197 (60%) of which are overloads. This means that for every 5 orders I attempt to place or update, I can expect to receive an overload reply 3 times. Autocorrelation on overloads is also high: if I just got an overload message on an API request, the probability that my next API request will receive an overload reply is 83%. For API requests that go through, the average response time is 2.15 seconds, with only 19 of my 131 good requests completing in less than a second. Now, even if I could trade and wasn't getting overloads more than half the time, an order taking more than a couple of milliseconds to go up is really unacceptable. Trading on a "pro-sumer" FCM at CME or CBOT with a VPS and crappy software like NT gets you a millisecond or two between your signal and having an order live, and your order is passing your broker's pre-trade risk checks and all that first. The really whack thing is that during periods of complete overload BitMEX is still doing plenty of trades, as evidenced by the trades feed, which means that (barring people having special access) either those trades are exclusively "Close" execInst trades with no quantity specified, or people are smashing the API in hopes of having one of their many orders get through. If the former, such "overload" should quickly resolve (as opposed to lasting for 15+ minutes). If the latter, those people are certainly the cause of this problem and there should be a negative incentive for them to continue such behavior.
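For what it's worth, here is a minimal sketch of the bookkeeping behind those numbers. The encoding matches the chart: a negative observation marks an "overloaded" reply, a non-negative one is the round-trip time in seconds. The sample data here is made up, not my actual log.

```python
def summarize(observations):
    """Overload rate, P(overload | previous overload), mean good response
    time, and count of sub-second completions for a list of observations."""
    overloads = [x < 0 for x in observations]
    good = [x for x in observations if x >= 0]
    overload_rate = sum(overloads) / len(observations)
    # Conditional probability that an overload follows an overload.
    after_overload = [b for a, b in zip(overloads, overloads[1:]) if a]
    autocorr = sum(after_overload) / len(after_overload) if after_overload else 0.0
    mean_response = sum(good) / len(good) if good else 0.0
    sub_second = sum(1 for x in good if x < 1.0)
    return overload_rate, autocorr, mean_response, sub_second

stats = summarize([-1, -1, 2.3, -1, 1.9, 0.8, -1, -1, 2.6])
```

Run over a real log of 328 observations this would reproduce the 60% overload rate, the 83% conditional overload probability, and the 2.15-second mean response time quoted above.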
As things stand, there's no way I can trade on BitMEX, although I would like to be able to. Edit: Another problem I didn't mention in my original draft is how long orders that are accepted by the API take to show a status of "working" via the WS feed. In my testing this can be around 10 seconds on average during times of high load. Interestingly, BitMEX seems to have a dead order sweeper that runs 13 seconds after an order is accepted, in the case of an order that was accepted but can't be executed or put on the book.
There's a pretty interesting debate in the AI space right now on whether FPGAs or ASICs are the way to go for hardware-accelerated AI in production. To summarize, it's more about how to operationalize AI: how to use already-trained models with millions of parameters to get real-time predictions, as in video analysis or complex time series models based on deep neural networks. Training those AI models still seems to favor GPUs for now. Google seem to be betting big on ASICs with their TPU. On the other hand, Microsoft and Amazon seem to favor FPGAs. In fact Microsoft have recently partnered with Xilinx to add FPGA co-processors on half of their servers (they were previously only using Intel's Altera). The FPGA is the more flexible piece of hardware but it is less efficient than an ASIC, and FPGAs have been notoriously hard to program against (though things are improving). There's also a nice article out there summarizing the classical FPGA conundrum: they're great for designing and prototyping, but as soon as your architecture stabilizes and you're looking to ramp up production, taking the time to do an ASIC will more often be the better investment. So the question (for me) is where AI inference will land in that regard. I'm sure Google's projects are large-scale enough that an ASIC makes sense, but not everyone is Google. And there is so much research being done in the AI space right now, and everyone's putting out so many promising new ideas, that being more flexible might carry an advantage. Google have already put out three versions of their TPUs in the space of two years. Which brings me back to Xilinx. They have a promising platform for AI acceleration, both in the datacenter and in embedded devices, which was launched two months ago. If it catches on, it's gonna give them a nice boost for the next couple of years. If it doesn't, they still have traditional Industrial, Aerospace & Defense workloads to fall back on...
Another wrinkle is that their SoCs are being used in crypto mining ASICs like the Antminer, so you never know how that demand is gonna go. Even as the value of BTC continues to sink, there is constant demand for more efficient mining hardware, and I do think cryptocurrencies are here to stay. While NVDA has fallen off a cliff recently due to excess GPU inventory, XLNX has kept steady. XLNX TTM P/E is 28.98; the Semiconductors - Programmable Logic industry's TTM P/E is 26.48. Thoughts?
I know this is a dumb question... but is there any way to have a little go at mining without buying a massive kit first?
As a New Zealander, there isn't an easy way for me to buy bitcoins (you need int'l bank transfers etc.), so I wondered if I could set up a machine to mine just a little and then figure out whether to scale up... I would really love to get involved, but my technical know-how is severely lacking.
As many of you already know, there is a weekly AMA (ask me anything) in our Telegram/WeChat groups every Saturday from 7-8 PM PST. This is the summary of last week's AMA. We are always happy to take feedback and answer your questions; see you all this Saturday! Part 1: Marketing Questions
Q: Do you have some good news to share with us? A: We successfully organized two meetups in Singapore and attended the Blockchain Connect Conference in Silicon Valley this week. On July 4th, we will unlock tokens, and the circulating supply will be 770 million at that time. What's more, we will launch our public testnet V2.0 and announce new partners before mid-July. All development and marketing plans are on track.
Q: Zilliqa has just announced testnet V2.0 with 1,000 nodes (4 shards) and a lot of exciting features. I know QuarkChain only has roughly 100 nodes in the coming public testnet. How do you compete with Zilliqa regarding nodes? What do you think about the new features they just introduced? A: Firstly, the number of nodes doesn't relate directly to scalability. For example, EOS only has 21 block-producing nodes, and Ontology just recently released its testnet with 15 nodes. In Ethereum and Bitcoin, having more nodes even means a slower network. Most people running so many nodes are just doing it for the incentive. Secondly, the reason why Zilliqa requires so many nodes is that its number of shards depends on the number of nodes (in this case, 250 nodes per shard). However, we don't have these constraints because of our design. This gives us a lot of benefits in achieving high TPS with a small number of nodes. You will understand more when you see our testnet, which will be released pretty soon.
Q: Blockchains are quite competitive. What plans do you have in place to encourage the community to support this project continuously? A: We will continue to post about our development progress, ecosystem building, and more on our social media, including Twitter, Telegram, Medium, Steemit, and Reddit. We will also run marketing campaigns after the public testnet.
Q: How is QuarkChain going to address the highly inflated TPS figures claimed by other companies? A: High TPS isn't everything. Besides high TPS, people also care about security, decentralization, stability, the token ecosystem, etc. Even regarding TPS, the critical difference between QuarkChain and other companies is that we allow adding more TPS (scaling out) on demand, while they can quickly hit their TPS limit.
Part 2: Technology Questions
Q: Can QuarkChain achieve decentralization with smaller nodes? How many nodes does QKC need at minimum to maintain the high TPS the whitepaper suggests? A: For your first question, thanks to our sharding technique, we can achieve scalability with a smaller number of nodes. For example, suppose our testnet has 21 clusters, and each one has 64 nodes; then the total number of nodes is 21 × 64 = 1,344. For your second question, it depends on how powerful the node is. Right now, a single powerful EC2 node can support 10k+ TPS.
Q: In Andre Cronje's article, he gave one issue QuarkChain may face, which is: "So since these are parallel chains, what happens if everyone simply transacts on Ethereum A, and no one uses Ethereum B or C? Well, then Ethereum A becomes congested and it will start suffering throughput. This will cause fees to go up, so now, if you wanted to have a cheap transaction, you could simply process your transaction on Ethereum B, and if both A and B have an equal load share, then you could move to C. This concept is the market-driven collaborative mining; thanks to the reward structure, the load is shared across the shards. The problem though: you have all your funds in Ethereum A and you want to now participate in the ICO on Ethereum C, but A is congested and you have to pay high fees to make your transfer from A to C, and not only do you have to pay the high fees, but you have to wait for the root chain to finalize your transactions, adding more overhead." Have you overcome this obstacle? If yes, how did you do that? A: There is a part missing in the article about how we partition system state. Adding more shards will move some state from existing shards to the new shards. Thus, the congestion is inherently solved after re-sharding.
Q: For cross-shard consensus, will it use the same mining tools as root chain transaction? A: Cross-shard transaction relies on root chain, so it will use the same consensus as root chain.
Q: I heard that the micromanagement users have to do between their shards would create bad UX. How do you solve this issue? A: This is addressed by the smart wallet, which handles the transaction details so that users won't have to be aware of them.
Seems like a viable alternative to building your own ghash machine: renting out someone else's for a similar fee. I know you can order private server access from colo services, but I don't know of any that outfit their boxes with nice GPUs. Anyone know of some place that does that, maybe something for contributing to Folding@home-style projects but that can be repurposed for mining? (I realize that this kind of thing would probably be spoken widely about if it existed, but it can't hurt to ask.)
Nonetheless, the ASIC miners have built up an incredible infrastructure, providing unmatched security.
It makes sense for Bitcoin forks to attempt to benefit from the security provided by the existing ASIC infrastructure.
If you disagree with these, there's probably not too much point arguing about the rest.
Meeting the design goals
To meet the design goals, producing blocks with a sha256(sha256(...)) PoW needs to remain possible. Similar reasoning has led people to propose a reduction in difficulty following the fork. I presume that if (say) a fork had signed up 20% of the hash power, then it would set its new difficulty to (around) 20% of the old difficulty. This seems risky though, as the reduction in difficulty would increase the risk of 51% attacks. (While the hash power needed for a 51% attack is the same regardless of the difficulty, with very low difficulty blocks will arrive much faster, making such attacks much harder to mitigate.) Additionally, in the event of a "mining heart attack" (a sudden drop in ASIC hash power), it is unlikely that a hard fork with reduced difficulty could be delivered fast enough to prevent a collapse in value. In any case, following a fork, there is likely to be much higher variance in transaction times, as miners move between chains and the difficulty adjustment algorithm struggles to keep up. People have proposed more responsive difficulty adjustment algorithms, but these produce problems in the longer term, including making certain attacks easier. This suggests that an alternative approach is needed: one in which most blocks are produced using the standard PoW, but in an emergency, an alternative CPU-mined PoW can take over. The idea of my proposal is to allow mining of CPU-mined blocks to commence only after a certain time has elapsed, where the passing of time is measured by the production of timing blocks. In normal times, this reduces the variance of the time between blocks, thus reducing the variance of confirmation times and making Bitcoin more reliable as a means of payment. In crisis times, such as after a fork or "mining heart attack", it enables CPU miners to produce blocks even when ASIC miners are not.
I propose the introduction of two new block types. For clarity, I will call the existing blocks "type A blocks" (A for ASIC). "Type C blocks" (C for CPU) fulfil a similar function to type A blocks, but will be produced with a different algorithm. "Type T blocks" will be small blocks used for timing. Both type C and type T blocks will be CPU-mineable. I will now spell out the details of these new block types.
Type A blocks are virtually identical to their current form.
Type A blocks may follow either type A or type C blocks in the chain. They may not follow type T blocks.
Type C and type T blocks will use a memory-hard PoW, requiring block chain data, such as the UTXO set. (Wild Keccak is one example.)
The difficulty of producing a type C block is always set to 20 times the difficulty of producing a type T block.
Type T blocks may follow either type A, C or T blocks, but no more than 60 type T blocks may be chained in a row.
Type T blocks contain a single coinbase transaction, and no other transactions.
Allowable coinbase transactions for type T blocks take as input the current block reward divided by 80.
The outputs of coinbase transactions from type T blocks are not spendable until followed by a type C block.
Type C blocks may only follow uninterrupted chains of 60 type T blocks.
Type C blocks contain a single coinbase transaction, and arbitrarily many other transactions (subject to the block size limit).
Allowable coinbase transactions for type C blocks take as input the current block reward divided by four, plus the sum of transaction fees from any included transactions.
Note that by construction, the total coinbase outputs of a run of 60 type T blocks and one type C block is 60/80+1/4 = 1 times the block reward, so there is no change to the total number of BTC being produced.
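The chaining rules and the reward arithmetic above can be sketched as a simple validity check (the function and constant names are mine, not part of the proposal):

```python
from fractions import Fraction

MAX_T_RUN = 60  # at most 60 type T blocks may be chained in a row

def valid_successor(chain, next_type):
    """May a block of next_type ('A', 'C', or 'T') extend this chain?"""
    prev = chain[-1] if chain else 'A'
    trailing_t = 0
    for b in reversed(chain):
        if b != 'T':
            break
        trailing_t += 1
    if next_type == 'A':
        return prev in ('A', 'C')       # type A may not follow type T
    if next_type == 'T':
        return trailing_t < MAX_T_RUN   # no more than 60 T blocks in a row
    if next_type == 'C':
        return trailing_t == MAX_T_RUN  # C only after a full run of 60 T
    return False

# Reward conservation: 60 T coinbases plus one C coinbase equal one reward.
total = 60 * Fraction(1, 80) + Fraction(1, 4)
```

Exact rational arithmetic confirms the note above: 60/80 + 1/4 is exactly one block reward.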
In counting blocks for difficulty adjustment, type T blocks are ignored. Thus the difficulty is adjusted after 2016 type A or C blocks since the last adjustment.
The new difficulty for type A blocks is adjusted as it is currently. ( new_difficulty = max( old_difficulty / 4, min( old_difficulty * 4, old_difficulty * ( two_weeks / time_since_last_adjustment ) ) ) )
The difficulty of a type T block (and hence of a type C block) is set according to the formula new_difficulty = max( old_difficulty / 4, min( old_difficulty * 4, old_difficulty * ( two_weeks / time_since_last_adjustment ) * ( num_type_C_blocks / 100 ) ^ ( 1 / 2 ) ) ), where num_type_C_blocks is the number of type C blocks among the last 2016 type A or type C blocks. The implicit target here is 100 type C blocks per 2016, meaning a drop in ASIC miner profits of around 5%, which is hopefully not enough to overly annoy them. The slower adjustment with respect to the number of type C blocks reflects the greater sampling variation in num_type_C_blocks and the fact that CPU power changes more slowly than ASIC power.
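The two adjustment rules can be written out as code. This is my own encoding of the formulas above, using seconds as the time unit; the proposal itself only gives the formulas.

```python
TWO_WEEKS = 14 * 24 * 3600  # adjustment target period, in seconds

def adjust_type_a(old_difficulty, elapsed):
    """Current Bitcoin-style adjustment, clamped to a factor of 4 per step."""
    raw = old_difficulty * (TWO_WEEKS / elapsed)
    return max(old_difficulty / 4, min(old_difficulty * 4, raw))

def adjust_type_t(old_difficulty, elapsed, num_type_c_blocks, target_c=100):
    """Type T adjustment: same clamp, with a square-root pull toward the
    implicit target of 100 type C blocks per 2016 A-or-C blocks."""
    raw = (old_difficulty * (TWO_WEEKS / elapsed)
           * (num_type_c_blocks / target_c) ** 0.5)
    return max(old_difficulty / 4, min(old_difficulty * 4, raw))
```

When the 2016 blocks arrive exactly on schedule and exactly 100 of them were type C, both rules leave the difficulty unchanged; overshooting the C-block target raises T difficulty only as the square root of the overshoot.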
Note, that with roughly 5% of all profits going to CPU miners in normal times, type T block times should be around 30 seconds, and type C block times should be a bit less than 10 minutes. This is in line with my prior proposal, linked above.
Multiple low-difficulty "T" blocks are not equivalent to one higher-difficulty block, because the variance of the time to produce N blocks of difficulty K is lower than the variance of the time to produce one block of difficulty NK. (Erlang vs exponential distributions.) The low variance of the time to produce 60 T blocks is what helps ensure that mining of C blocks only starts after around 30 minutes, meaning that it only happens when ASIC miners have failed to produce A blocks for some reason.
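The Erlang-vs-exponential point can be made concrete with a little closed-form arithmetic (a sketch; the 60-block run is the proposal's, the distribution facts are standard):

```python
import math

n = 60      # T blocks per run
rate = 1.0  # arbitrary per-block hashing rate

mean = n / rate                 # both schemes share this mean completion time
var_single = mean ** 2          # variance of one exponential with that mean
var_erlang = n / rate ** 2      # variance of a sum of n iid Exp(rate) draws

cv_single = math.sqrt(var_single) / mean  # coefficient of variation = 1
cv_erlang = math.sqrt(var_erlang) / mean  # = 1/sqrt(60), about 0.13
```

The 60-block run has 60 times less variance than a single block of equivalent total difficulty, which is why the 30-minute gate before C-block mining is so predictable.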
The initial difficulty of producing type T and C blocks following the fork should be set so that in a hypothetical world in which (a) only one person CPU mined and (b) the price post-fork was equal to the price pre-fork, that one miner would exactly break even in expectation by CPU mining type T and C blocks on Amazon EC2, assuming that they obtained 5% of all block rewards. This is likely to be a substantial under-estimate of the true cost of CPU mining, due to people having access to zero (or at least lower) marginal cost CPU power, but an under-estimate is desirable to provide resilience post-fork.
substantially reduces the variance of block times, increasing Bitcoin's use as a means of payment, and hence (probably) increasing its price,
encourages more people to run full nodes, due to the returns to CPU mining, increasing decentralization,
provides protection from sudden falls in ASIC hash rate, reducing tail risk of holding Bitcoin, and thus again (probably) increasing its price,
helps provide hash power post-fork, without driving away the existing miners and their hardware,
[dev] A quick update, and a work in progress smart contracts guide
As much as it seems odd saying this with a sticky at the top of the subreddit, I do know some people don't read stickies, so: Dogecoin Core 1.10.0 IS OUT NOW, and it's a huge security update you really do need. However, it does require reindexing the blocks on disk, and if you absolutely cannot do so (i.e. you run a service that can't handle the downtime right now), there's also Dogecoin Core 1.8.3, which has the most important parts backported to it. If you use Dogecoin Core, you need to upgrade to one of these two, seriously. On that note, we've got about 20-25% of nodes upgraded now; there's an (approximate) pie chart at https://docs.google.com/spreadsheets/d/1qPqLy0FFp2mApJBh9a-_cDERCGxP40pnJIdQwGSDYMo/pubchart?oid=208198724&format=image that you can watch if you're really curious. I'm seeing 1.10.0 nodes come online then go offline; if you can keep a 1.10.0 node online, it would be much appreciated. I've got a few EC2 nodes online while the update rolls out, as well, to help support the numbers. Enough of that; what's coming next? bitcoinj & Multidoge HD work is more or less just rolling along quietly, waiting primarily on others at the moment. We're planning out Dogecoin Core 1.11, which will be based on Bitcoin Core 0.12. The big new thing in there will be OP_CHECKLOCKTIMEVERIFY (often shortened to CLTV), which finally lets us use smart contracts securely on the main Dogecoin block chain. It's going out to Bitcoin in their 0.11.2 release; however, as we've just released a client, we're going to skip that one (or it may be produced as a version we test but never release). I promised everyone a guide to smart contracts, and... well, it's gone a bit awry. What I thought would be around 6 pages is now at 9 pages and growing, so it's going to take a while to finish. However, it does cover the basics, and hopefully is enough both to let a general audience understand what smart contracts are, and to let a more technical audience understand how they can use them.
The document so far is up at https://docs.google.com/document/d/1gk74C_AOfRwmhq1WeTHBxMo3Lelh0QQY6iJLj8tQFzI/pub but there will be further revisions later.

Lastly, testnet - there are still a lot of old nodes on testnet; please update to 1.10.0, especially if you're mining (because someone's generating old v2 blocks and they're causing problems). I'm away the weekend of the 29th, so that update is likely to be on the 30th instead, but I'll try to get something out that weekend. Might be quiet for a bit while the dust settles on the new release, anyway!
Has anyone tried mining ether on a cloud service, where you can rent/buy computational power to do the work (like Amazon Web Services and such)? I would like to know how the performance is on such services. Do they give good computational speed at low cost, enough to profit from ether mining? I plan to shift to this because my PC has an 8GB GTX 850M card which is churning out 1.6 MH/s on average, which is pretty low and simply pointless to continue at this rate. I would in fact appreciate any comments about this! Thank you!
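Before renting anything, the expected return is worth sketching out. Below is a rough profitability estimate; every constant (network hashrate, block time, reward, prices, rental cost) is an illustrative assumption you would replace with current figures, not live data:

```python
# Rough ether-mining profitability estimate.
# All constants here are illustrative assumptions, not real network data.

def daily_eth(my_hashrate_mhs: float, network_hashrate_mhs: float,
              block_reward_eth: float, blocks_per_day: int = 5760) -> float:
    """Expected ETH per day: your share of network hashrate times daily issuance.
    5760 blocks/day assumes a ~15 s block time."""
    share = my_hashrate_mhs / network_hashrate_mhs
    return share * blocks_per_day * block_reward_eth

def daily_profit(eth_per_day: float, eth_price_usd: float,
                 rental_cost_usd_per_day: float) -> float:
    """Revenue minus the daily cost of the rented instance."""
    return eth_per_day * eth_price_usd - rental_cost_usd_per_day

# A 1.6 MH/s card against an assumed 2,000,000 MH/s network:
eth = daily_eth(1.6, 2_000_000, 5.0)
profit = daily_profit(eth, 10.0, 1.2)
print(eth, profit)  # a tiny fraction of a coin; the rental cost dominates
```

With the assumed numbers the margin comes out negative, which is the usual conclusion for GPU mining on rented cloud hardware.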
A practical way to put miners back to use and back Bitcoin with compute power.
I've heard time and again "Why doesn't Bitcoin just do [insert useful computation here] to secure the network!" Why not Folding@home? Why not cloud computing? Bitcoin verification must satisfy these properties:
Hard to compute.
Easy to verify.
Doesn't depend on something of intrinsic value that could dramatically change.
That last one is important because if Folding@home was done, and then a cure for cancer was found, the value of bitcoin would crash. Folding@home doesn't satisfy the 2nd property anyway. However, we can put miners back to good (profitable) use, and back Bitcoin's value with computational power! Amazon currently offers a cloud compute service which charges for its use by the hour. The Bitcoin network of verifiers currently represents the largest distributed computing network in the world. If we build some software to distribute relatively arbitrary GPGPU computation to miners, and we build a service that offers this computational power to clients, then we can sell Bitcoin mining compute power.

Would this reduce the security of the Bitcoin network? Yes and no. The security of the Bitcoin network is about to drop due to mining rigs turning off as a result of non-profitability. We could have these miners turn their rigs back on for the cloud compute service. This wouldn't represent a loss of Bitcoin network security, because we have already lost this security. If this service were only to accept Bitcoins, then the value of Bitcoins would be backed partly by computational power -- much in the way that it is currently backed partly by the drug trade.

Edit: Some of you have misinterpreted this as a proposal for a new currency that is secured by arbitrary computation. I explained above why that would not be possible. This is a proposal for a service that pools mining power for sale -- payable in Bitcoins. Ex-miners would contribute to the service, and be paid daily for their contribution. The idea is to back the bitcoin economy with a new merchant service, using powerful equipment that we are about to stop using anyway.

Edit #2: People keep bringing up centralization, as if this somehow centralizes the entire currency. Suggesting that this needs to be decentralized is as silly as suggesting that your local bar/restaurant needs to be "decentralized" before it can accept Bitcoins.
This is a merchant service -- not a currency!
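The first two properties above (hard to compute, easy to verify) are exactly what hashcash-style proof of work provides. A minimal sketch of the idea, simplified rather than Bitcoin's actual double-SHA256 block-header format:

```python
# Toy hashcash-style proof of work: finding a valid nonce takes
# brute force, but checking one takes a single hash.
import hashlib

def verify(data: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Easy to verify: one SHA-256 hash, then check for leading zero bits."""
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

def mine(data: bytes, difficulty_bits: int) -> int:
    """Hard to compute: try nonces until the check passes.
    Expected work doubles with every extra difficulty bit."""
    nonce = 0
    while not verify(data, nonce, difficulty_bits):
        nonce += 1
    return nonce

nonce = mine(b"block header", 12)      # ~4096 attempts on average
assert verify(b"block header", nonce, 12)
```

Replacing this with protein folding fails property 2: there is no single-hash way to confirm a folding result is correct, which is why "useful work" schemes keep running into the same wall.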
Right, first of all, assume that full permission from employers has been received. Please don't beat me up too much on this point. Why not have infrastructure mining for bitcoin during its downtime? Let's rule out the production environment. I administer a dozen powerful hypervisor machines that are used nearly exclusively during working hours for internal dev/QA environments. That's a lot of downtime. The only non-working-hours tasks are backups and some load test suites, and that's only a fraction of the estate and well documented. Would it be unreasonable to have a slim *nix box power on most evenings, steal most of the CPU per hypervisor, and mine for all it's worth? It'd obviously have to be configured like one would an EC2 spot instance: easily killed on contention, but also quick to spawn on idleness. I also have a managed hosting staging environment (managed by me) that does nothing (not even backups) overnight. I don't even have to worry about burning out the hardware there, due to its SLA. I know bitcoin's a huge bubble waiting to burst, but why not learn more about dynamic task scheduling, put in nothing but CPU time (and some sysadmin time), and get some cash back out? Thoughts?
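The "spot-instance-like" gating described above is mostly a scheduling decision: mine only outside working hours, and yield immediately when real load appears. A minimal sketch of that policy follows; the office hours, load ceiling, and the miner path/command are all placeholders for illustration, not a real deployment:

```python
# Sketch of "mine off-hours, yield on contention" gating for an idle host.
# WORK_START/WORK_END, LOAD_CEILING, and the miner command are assumptions.
import os
import subprocess
from datetime import datetime

WORK_START, WORK_END = 8, 19   # assumed office hours (dev/QA in use)
LOAD_CEILING = 2.0             # 1-min load average above this => real work running

def in_mining_window(hour: int) -> bool:
    """True outside working hours."""
    return hour < WORK_START or hour >= WORK_END

def should_mine(now: datetime, load_1min: float) -> bool:
    """Mine only when the clock says idle AND the box actually is idle."""
    return in_mining_window(now.hour) and load_1min < LOAD_CEILING

def launch_miner() -> None:
    """Start the (hypothetical) miner at lowest priority, so any
    contending dev/QA workload wins the CPU back immediately."""
    subprocess.Popen(["nice", "-n", "19", "/opt/miner/miner",
                      "--config", "pool.conf"])

def tick() -> None:
    """Run this from cron every few minutes."""
    if should_mine(datetime.now(), os.getloadavg()[0]):
        launch_miner()
```

Running the miner under `nice -n 19` (and killing it when `should_mine` turns false) approximates the spot-instance behaviour: cheap to start, first to die under contention.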
Click on EC2 (stands for Elastic Compute Cloud); that will give you GPU horsepower for mining the Ethereum blockchain. Step 2 - Set up the pre-built AMI (Amazon Machine Image) on AWS EC2. An Amazon Machine Image (AMI) "provides the information required to launch an instance, which is a virtual server in the cloud".

ec2-bitcoin-mining: setup for Bitcoin mining on EC2 cg1.4xlarge instances. This makes no economic sense. What this does: creates an autoscaling rule to start new EC2 instances at spot pricing which will mine for Bitcoins. Total rate per instance is about 170 Mhash/s, with the lowest spot prices being around $0.37/hr.

In part 1, we looked at mining Litecoins on CPUs rented from Amazon EC2. Now, let us see if we can get better performance by mining Litecoins using GPUs. Using CPUs, we were able to achieve an average hash rate of 144 KH/s using Amazon EC2's c3.8xlarge instances, which come with 32 CPUs. Recently,…

And again, EC2 instances of the g2, g3, and p2 flavor can run you a pretty penny. We've also been forewarned that we'll be competing with massive bitcoin mining farms that use ASIC miners that blow GPU mining out of the water. Is Ethereum mining on AWS profitable? Ethereum dual-mining profitability comparison (late June 2017).

CPU mining for Bitcoins is almost never worthwhile since the advent of GPU mining. If your EC2 instances are free you're welcome to try, but there's a better alternative. A decent CPU won't get you but 3-5 MH/s of Bitcoin mining power, which at current difficulty and exchange rates will net you about 0.0046 BTC per day (worth about a penny at ...
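The "makes no economic sense" verdict on the 170 Mhash/s, $0.37/hr spot setup quoted above is easy to reproduce. A back-of-envelope sketch, where the difficulty, block reward, and BTC price are stale era-appropriate placeholders rather than current values:

```python
# Back-of-envelope: is 170 MH/s at $0.37/hr worth running?
# Difficulty, block reward, and BTC price below are illustrative placeholders.

def btc_per_day(hashrate_hs: float, difficulty: float,
                reward_btc: float = 25.0) -> float:
    """Expected BTC/day: hashes per day divided by the expected number of
    hashes per block (difficulty * 2^32), times the block reward."""
    hashes_per_block = difficulty * 2**32
    return hashrate_hs * 86_400 / hashes_per_block * reward_btc

def daily_margin(hashrate_hs: float, difficulty: float,
                 btc_price_usd: float, cost_per_hour_usd: float) -> float:
    """Mined value per day minus 24 hours of instance rental."""
    return btc_per_day(hashrate_hs, difficulty) * btc_price_usd \
        - cost_per_hour_usd * 24

# One 170 MH/s spot instance at $0.37/hr, assumed difficulty 1.6e6, BTC at $10:
margin = daily_margin(170e6, 1.6e6, 10.0, 0.37)
print(margin)  # deeply negative: the EC2 bill dwarfs the mined coins
```

The same function shows why ASIC farms change the picture: multiply the hashrate by a few orders of magnitude at similar power cost and the margin flips sign.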
Genesis Mining (the best mining operation around today) 👇🏻 ... video tutorial on how to mine bitcoin and cryptocurrencies with virtual machines on Ubuntu ... (AWS - EC2 instance) - duration: 47:15. CaveiraTech ...

Is mining Bitcoin BTC still profitable in 2020? Let's review mining profitability: Bitcoin, Bitcoin Cash, and Bitcoin SV. Block reward halving, network diffi...

Bitmain Antminer S9 13.5 TH/s - verdict on my mining (Bitcoin mining) - duration: 4:04. ... Getting Started with Amazon EC2: Launching a Windows Instance - duration: 5:49. Amazon Web Services ...

In 2014, before Ethereum and altcoin mania, before ICOs and concerns about Tether and Facebook's Libra, Motherboard gained access to a massive and secretive ... The virtual goldrush to mine Bitcoin and other cryptocurrencies leads us to Central Washington state, where a Bitcoin mine generates roughly $70,000 a day min...