Compare commits

...

17 Commits

Author SHA1 Message Date
954f6110d6 4.0.0-beta.1 2021-03-03 16:23:38 +01:00
4390b8df12 Add ERC721URIStorage extension (#2555)
(cherry picked from commit 1705067e65)
2021-03-03 16:15:51 +01:00
cb88e15b33 Enable optimizations when publishing package (#2557)
(cherry picked from commit 618a735816)
2021-03-03 16:15:26 +01:00
136de91049 Rename variable master to implementation #2 (#2553)
(cherry picked from commit 103ff8e23d)
2021-03-03 12:33:03 +01:00
e2bf45f262 Rename variable master to implementation
(cherry picked from commit cdb929aada)
2021-03-02 21:30:26 -03:00
93d990c653 Optimize constructor of ERC777 (#2551)
(cherry picked from commit 62af16b9f2)
2021-03-02 21:31:07 +01:00
3dfd02b4b4 Fix link to TimelockController
(cherry picked from commit ba1d773176)
2021-03-02 21:14:33 +01:00
7a7bd8f6d7 Fix typo Controler -> Controller
(cherry picked from commit 583146f9d6)
2021-03-02 21:14:26 +01:00
16312fcfb9 Rename UpgradeableProxy to ERC1967Proxy (#2547)
Co-authored-by: Francisco Giordano <frangio.1@gmail.com>
(cherry picked from commit c789941d76)
2021-03-02 21:14:08 +01:00
a81a88cca0 Fix mentions of buidler (#2548)
(cherry picked from commit 232c355b3a)
2021-03-02 21:14:00 +01:00
5acedf5027 Change title of meta transactions page in docs sidebar
(cherry picked from commit 773c7265e8)
2021-03-02 21:13:51 +01:00
566c601d41 Update lockfile (#2546)
Co-authored-by: Renovate Bot <bot@renovateapp.com>
(cherry picked from commit 742c436d28)
2021-03-02 21:13:39 +01:00
15214a53ce Optimize implementation of ERC20Capped (#2524)
(cherry picked from commit 36b855972b)
2021-03-01 12:09:21 -03:00
d5f4862405 Fix package.json files field
(cherry picked from commit 735c03fcf3)
2021-02-24 00:36:08 -03:00
4a1985f870 Fix migrate-imports.js test for upgradeable paths
(cherry picked from commit 4519c237c5)
2021-02-24 00:20:07 -03:00
ac8279a0a5 Update Antora component version 2021-02-23 23:31:48 -03:00
7cab19a2e4 Check upgradeable paths in migrate-imports test
(cherry picked from commit 4ee9fd77fd)
2021-02-23 23:31:34 -03:00
32 changed files with 10812 additions and 5483 deletions

.gitignore vendored (2 lines changed)
View File

@ -54,6 +54,6 @@ allFiredEvents
.coverage_cache
.coverage_contracts
# buidler
# hardhat
cache
artifacts

View File

@ -14,6 +14,10 @@
* `AccessControl`: removed enumerability by default for a more lightweight contract. It is now opt-in through `AccessControlEnumerable`. ([#2512](https://github.com/OpenZeppelin/openzeppelin-contracts/pull/2512))
* Meta Transactions: add `ERC2771Context` and a `MinimalForwarder` for meta-transactions. ([#2508](https://github.com/OpenZeppelin/openzeppelin-contracts/pull/2508))
* Overall reorganisation of the contract folder to improve clarity and discoverability. ([#2503](https://github.com/OpenZeppelin/openzeppelin-contracts/pull/2503))
* `ERC20Capped`: optimize gas usage by enforcing the check directly in `_mint`. ([#2524](https://github.com/OpenZeppelin/openzeppelin-contracts/pull/2524))
* Rename `UpgradeableProxy` to `ERC1967Proxy`. ([#2547](https://github.com/OpenZeppelin/openzeppelin-contracts/pull/2547))
* `ERC777`: Optimize the gas costs of the constructor. ([#2551](https://github.com/OpenZeppelin/openzeppelin-contracts/pull/2551))
* `ERC721URIStorage`: Add a new extension that implements the `tokenURI` behavior as it was available in 3.4.0. ([#2555](https://github.com/OpenZeppelin/openzeppelin-contracts/pull/2555))
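As a rough sketch of the first entry above (not part of this diff), a contract that still needs role enumeration after 4.0 opts in by inheriting `AccessControlEnumerable`; the import path and the `_setupRole` helper are assumptions based on the new folder layout, not shown in this compare.

[source,solidity]
----
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed import path for the 4.0 folder layout.
import "@openzeppelin/contracts/access/AccessControlEnumerable.sol";

contract RoleRegistry is AccessControlEnumerable {
    bytes32 public constant MINTER_ROLE = keccak256("MINTER_ROLE");

    constructor() {
        // The deployer gets the default admin role and can manage other roles.
        _setupRole(DEFAULT_ADMIN_ROLE, msg.sender);
    }

    // Enumeration comes from AccessControlEnumerable:
    // getRoleMemberCount(role) and getRoleMember(role, index).
    function minterCount() external view returns (uint256) {
        return getRoleMemberCount(MINTER_ROLE);
    }
}
----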
### How to upgrade from 3.x

View File

@ -26,7 +26,7 @@ This directory includes primitives for on-chain governance. We currently only of
[[timelock-operation]]
==== Operation structure
Operation executed by the xref:api:access.adoc#TimelockController[`TimelockControler`] can contain one or multiple subsequent calls. Depending on whether you need to multiple calls to be executed atomically, you can either use simple or batched operations.
Operations executed by the xref:api:governance.adoc#TimelockController[`TimelockController`] can contain one or multiple subsequent calls. Depending on whether you need multiple calls to be executed atomically, you can either use simple or batched operations.
Both operations contain:
@ -50,16 +50,16 @@ Timelocked operations are identified by a unique id (their hash) and follow a sp
`Unset` -> `Pending` -> `Pending` + `Ready` -> `Done`
* By calling xref:api:access.adoc#TimelockController-schedule-address-uint256-bytes-bytes32-bytes32-uint256-[`schedule`] (or xref:api:access.adoc#TimelockController-scheduleBatch-address---uint256---bytes---bytes32-bytes32-uint256-[`scheduleBatch`]), a proposer moves the operation from the `Unset` to the `Pending` state. This starts a timer that must be longer than the minimum delay. The timer expires at a timestamp accessible through the xref:api:access.adoc#TimelockController-getTimestamp-bytes32-[`getTimestamp`] method.
* By calling xref:api:governance.adoc#TimelockController-schedule-address-uint256-bytes-bytes32-bytes32-uint256-[`schedule`] (or xref:api:governance.adoc#TimelockController-scheduleBatch-address---uint256---bytes---bytes32-bytes32-uint256-[`scheduleBatch`]), a proposer moves the operation from the `Unset` to the `Pending` state. This starts a timer that must be longer than the minimum delay. The timer expires at a timestamp accessible through the xref:api:governance.adoc#TimelockController-getTimestamp-bytes32-[`getTimestamp`] method.
* Once the timer expires, the operation automatically gets the `Ready` state. At this point, it can be executed.
* By calling xref:api:access.adoc#TimelockController-TimelockController-execute-address-uint256-bytes-bytes32-bytes32-[`execute`] (or xref:api:access.adoc#TimelockController-executeBatch-address---uint256---bytes---bytes32-bytes32-[`executeBatch`]), an executor triggers the operation's underlying transactions and moves it to the `Done` state. If the operation has a predecessor, it has to be in the `Done` state for this transition to succeed.
* xref:api:access.adoc#TimelockController-TimelockController-cancel-bytes32-[`cancel`] allows proposers to cancel any `Pending` operation. This resets the operation to the `Unset` state. It is thus possible for a proposer to re-schedule an operation that has been cancelled. In this case, the timer restarts when the operation is re-scheduled.
* By calling xref:api:governance.adoc#TimelockController-TimelockController-execute-address-uint256-bytes-bytes32-bytes32-[`execute`] (or xref:api:governance.adoc#TimelockController-executeBatch-address---uint256---bytes---bytes32-bytes32-[`executeBatch`]), an executor triggers the operation's underlying transactions and moves it to the `Done` state. If the operation has a predecessor, it has to be in the `Done` state for this transition to succeed.
* xref:api:governance.adoc#TimelockController-TimelockController-cancel-bytes32-[`cancel`] allows proposers to cancel any `Pending` operation. This resets the operation to the `Unset` state. It is thus possible for a proposer to re-schedule an operation that has been cancelled. In this case, the timer restarts when the operation is re-scheduled.
Operations status can be queried using the functions:
* xref:api:access.adoc#TimelockController-isOperationPending-bytes32-[`isOperationPending(bytes32)`]
* xref:api:access.adoc#TimelockController-isOperationReady-bytes32-[`isOperationReady(bytes32)`]
* xref:api:access.adoc#TimelockController-isOperationDone-bytes32-[`isOperationDone(bytes32)`]
* xref:api:governance.adoc#TimelockController-isOperationPending-bytes32-[`isOperationPending(bytes32)`]
* xref:api:governance.adoc#TimelockController-isOperationReady-bytes32-[`isOperationReady(bytes32)`]
* xref:api:governance.adoc#TimelockController-isOperationDone-bytes32-[`isOperationDone(bytes32)`]
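A hedged sketch of this lifecycle (not part of the diff): the caller below needs the proposer role to schedule and the executor role to execute, and the function signatures follow the references above; the import path is an assumption based on the move to the `governance` folder.

[source,solidity]
----
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed import path for the 4.0 folder layout.
import "@openzeppelin/contracts/governance/TimelockController.sol";

contract TimelockClient {
    TimelockController public immutable timelock;

    constructor(TimelockController timelock_) {
        timelock = timelock_;
    }

    // Proposer role required: moves the operation from `Unset` to `Pending`
    // and starts the timer (the delay must not be shorter than the minimum delay).
    function propose(address target, bytes calldata data, bytes32 salt, uint256 delay) external {
        timelock.schedule(target, 0, data, bytes32(0), salt, delay);
    }

    // Executor role required: only succeeds once the operation is `Ready`,
    // and moves it to `Done`.
    function run(address target, bytes calldata data, bytes32 salt) external {
        timelock.execute(target, 0, data, bytes32(0), salt);
    }
}
----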
[[timelock-roles]]
==== Roles

View File

@ -11,16 +11,16 @@ contract ClonesMock {
event NewInstance(address instance);
function clone(address master, bytes calldata initdata) public payable {
_initAndEmit(master.clone(), initdata);
function clone(address implementation, bytes calldata initdata) public payable {
_initAndEmit(implementation.clone(), initdata);
}
function cloneDeterministic(address master, bytes32 salt, bytes calldata initdata) public payable {
_initAndEmit(master.cloneDeterministic(salt), initdata);
function cloneDeterministic(address implementation, bytes32 salt, bytes calldata initdata) public payable {
_initAndEmit(implementation.cloneDeterministic(salt), initdata);
}
function predictDeterministicAddress(address master, bytes32 salt) public view returns (address predicted) {
return master.predictDeterministicAddress(salt);
function predictDeterministicAddress(address implementation, bytes32 salt) public view returns (address predicted) {
return implementation.predictDeterministicAddress(salt);
}
function _initAndEmit(address instance, bytes memory initdata) private {

View File

@ -7,7 +7,19 @@ import "../token/ERC721/extensions/ERC721Burnable.sol";
contract ERC721BurnableMock is ERC721Burnable {
constructor(string memory name, string memory symbol) ERC721(name, symbol) { }
function exists(uint256 tokenId) public view returns (bool) {
return _exists(tokenId);
}
function mint(address to, uint256 tokenId) public {
_mint(to, tokenId);
}
function safeMint(address to, uint256 tokenId) public {
_safeMint(to, tokenId);
}
function safeMint(address to, uint256 tokenId, bytes memory _data) public {
_safeMint(to, tokenId, _data);
}
}

View File

@ -25,6 +25,10 @@ contract ERC721EnumerableMock is ERC721Enumerable {
return _baseURI();
}
function exists(uint256 tokenId) public view returns (bool) {
return _exists(tokenId);
}
function mint(address to, uint256 tokenId) public {
_mint(to, tokenId);
}

View File

@ -11,18 +11,6 @@ import "../token/ERC721/extensions/ERC721Pausable.sol";
contract ERC721PausableMock is ERC721Pausable {
constructor (string memory name, string memory symbol) ERC721(name, symbol) { }
function mint(address to, uint256 tokenId) public {
super._mint(to, tokenId);
}
function burn(uint256 tokenId) public {
super._burn(tokenId);
}
function exists(uint256 tokenId) public view returns (bool) {
return super._exists(tokenId);
}
function pause() external {
_pause();
}
@ -30,4 +18,24 @@ contract ERC721PausableMock is ERC721Pausable {
function unpause() external {
_unpause();
}
function exists(uint256 tokenId) public view returns (bool) {
return _exists(tokenId);
}
function mint(address to, uint256 tokenId) public {
_mint(to, tokenId);
}
function safeMint(address to, uint256 tokenId) public {
_safeMint(to, tokenId);
}
function safeMint(address to, uint256 tokenId, bytes memory _data) public {
_safeMint(to, tokenId, _data);
}
function burn(uint256 tokenId) public {
_burn(tokenId);
}
}

View File

@ -0,0 +1,51 @@
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
import "../token/ERC721/extensions/ERC721URIStorage.sol";
/**
* @title ERC721URIStorageMock
* This mock just provides public safeMint, mint, and burn functions for testing purposes
*/
contract ERC721URIStorageMock is ERC721URIStorage {
string private _baseTokenURI;
constructor (string memory name, string memory symbol) ERC721(name, symbol) { }
function _baseURI() internal view virtual override returns (string memory) {
return _baseTokenURI;
}
function setBaseURI(string calldata newBaseTokenURI) public {
_baseTokenURI = newBaseTokenURI;
}
function baseURI() public view returns (string memory) {
return _baseURI();
}
function setTokenURI(uint256 tokenId, string memory _tokenURI) public {
_setTokenURI(tokenId, _tokenURI);
}
function exists(uint256 tokenId) public view returns (bool) {
return _exists(tokenId);
}
function mint(address to, uint256 tokenId) public {
_mint(to, tokenId);
}
function safeMint(address to, uint256 tokenId) public {
_safeMint(to, tokenId);
}
function safeMint(address to, uint256 tokenId, bytes memory _data) public {
_safeMint(to, tokenId, _data);
}
function burn(uint256 tokenId) public {
_burn(tokenId);
}
}

View File

@ -1,12 +1,11 @@
{
"name": "@openzeppelin/contracts",
"description": "Secure Smart Contract library for Solidity",
"version": "4.0.0-beta.0",
"version": "4.0.0-beta.1",
"files": [
"**/*.sol",
"/build/contracts/*.json",
"!/mocks",
"!/examples"
"!/mocks/**/*"
],
"scripts": {
"prepare": "bash ../scripts/prepare-contracts-package.sh",

View File

@ -17,16 +17,16 @@ pragma solidity ^0.8.0;
*/
library Clones {
/**
* @dev Deploys and returns the address of a clone that mimics the behaviour of `master`.
* @dev Deploys and returns the address of a clone that mimics the behaviour of `implementation`.
*
* This function uses the create opcode, which should never revert.
*/
function clone(address master) internal returns (address instance) {
function clone(address implementation) internal returns (address instance) {
// solhint-disable-next-line no-inline-assembly
assembly {
let ptr := mload(0x40)
mstore(ptr, 0x3d602d80600a3d3981f3363d3d373d3d3d363d73000000000000000000000000)
mstore(add(ptr, 0x14), shl(0x60, master))
mstore(add(ptr, 0x14), shl(0x60, implementation))
mstore(add(ptr, 0x28), 0x5af43d82803e903d91602b57fd5bf30000000000000000000000000000000000)
instance := create(0, ptr, 0x37)
}
@ -34,18 +34,18 @@ library Clones {
}
/**
* @dev Deploys and returns the address of a clone that mimics the behaviour of `master`.
* @dev Deploys and returns the address of a clone that mimics the behaviour of `implementation`.
*
* This function uses the create2 opcode and a `salt` to deterministically deploy
* the clone. Using the same `master` and `salt` multiple time will revert, since
* the clone. Using the same `implementation` and `salt` multiple times will revert, since
* the clones cannot be deployed twice at the same address.
*/
function cloneDeterministic(address master, bytes32 salt) internal returns (address instance) {
function cloneDeterministic(address implementation, bytes32 salt) internal returns (address instance) {
// solhint-disable-next-line no-inline-assembly
assembly {
let ptr := mload(0x40)
mstore(ptr, 0x3d602d80600a3d3981f3363d3d373d3d3d363d73000000000000000000000000)
mstore(add(ptr, 0x14), shl(0x60, master))
mstore(add(ptr, 0x14), shl(0x60, implementation))
mstore(add(ptr, 0x28), 0x5af43d82803e903d91602b57fd5bf30000000000000000000000000000000000)
instance := create2(0, ptr, 0x37, salt)
}
@ -55,12 +55,12 @@ library Clones {
/**
* @dev Computes the address of a clone deployed using {Clones-cloneDeterministic}.
*/
function predictDeterministicAddress(address master, bytes32 salt, address deployer) internal pure returns (address predicted) {
function predictDeterministicAddress(address implementation, bytes32 salt, address deployer) internal pure returns (address predicted) {
// solhint-disable-next-line no-inline-assembly
assembly {
let ptr := mload(0x40)
mstore(ptr, 0x3d602d80600a3d3981f3363d3d373d3d3d363d73000000000000000000000000)
mstore(add(ptr, 0x14), shl(0x60, master))
mstore(add(ptr, 0x14), shl(0x60, implementation))
mstore(add(ptr, 0x28), 0x5af43d82803e903d91602b57fd5bf3ff00000000000000000000000000000000)
mstore(add(ptr, 0x38), shl(0x60, deployer))
mstore(add(ptr, 0x4c), salt)
@ -72,7 +72,7 @@ library Clones {
/**
* @dev Computes the address of a clone deployed using {Clones-cloneDeterministic}.
*/
function predictDeterministicAddress(address master, bytes32 salt) internal view returns (address predicted) {
return predictDeterministicAddress(master, salt, address(this));
function predictDeterministicAddress(address implementation, bytes32 salt) internal view returns (address predicted) {
return predictDeterministicAddress(implementation, salt, address(this));
}
}
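A minimal factory sketch using the renamed `implementation` parameter; the function signatures are exactly the ones shown above, while the import path is assumed from the 4.0 folder layout.

[source,solidity]
----
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed import path for the 4.0 folder layout.
import "@openzeppelin/contracts/proxy/Clones.sol";

contract CloneFactory {
    address public immutable implementation;

    constructor(address implementation_) {
        implementation = implementation_;
    }

    // Cheap, non-deterministic instance (create opcode).
    function createInstance() external returns (address) {
        return Clones.clone(implementation);
    }

    // Deterministic instance (create2); reusing the same salt reverts.
    function createInstanceAt(bytes32 salt) external returns (address) {
        return Clones.cloneDeterministic(implementation, salt);
    }

    // Address of the clone that createInstanceAt(salt) would deploy.
    function predictInstance(bytes32 salt) external view returns (address) {
        return Clones.predictDeterministicAddress(implementation, salt);
    }
}
----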

View File

@ -2,8 +2,8 @@
pragma solidity ^0.8.0;
import "./Proxy.sol";
import "../utils/Address.sol";
import "../Proxy.sol";
import "../../utils/Address.sol";
/**
* @dev This contract implements an upgradeable proxy. It is upgradeable because calls are delegated to an
@ -14,7 +14,7 @@ import "../utils/Address.sol";
* Upgradeability is only provided internally through {_upgradeTo}. For an externally upgradeable proxy see
* {TransparentUpgradeableProxy}.
*/
contract UpgradeableProxy is Proxy {
contract ERC1967Proxy is Proxy {
/**
* @dev Initializes the upgradeable proxy with an initial implementation specified by `_logic`.
*
@ -66,7 +66,7 @@ contract UpgradeableProxy is Proxy {
* @dev Stores a new address in the EIP1967 implementation slot.
*/
function _setImplementation(address newImplementation) private {
require(Address.isContract(newImplementation), "UpgradeableProxy: new implementation is not a contract");
require(Address.isContract(newImplementation), "ERC1967Proxy: new implementation is not a contract");
bytes32 slot = _IMPLEMENTATION_SLOT;
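Deploying the renamed proxy only takes an implementation address plus optional initialization calldata, as in this sketch; the constructor shape matches the `ERC1967Proxy(_logic, _data)` call in the `TransparentUpgradeableProxy` diff further down, and the import path is assumed from the new `ERC1967` folder.

[source,solidity]
----
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed import path for the 4.0 folder layout.
import "@openzeppelin/contracts/proxy/ERC1967/ERC1967Proxy.sol";

contract ProxyDeployer {
    // Deploys a proxy pointing at `logic` and runs `initData` against it
    // (typically an initialize(...) call), all in one transaction.
    function deployProxy(address logic, bytes memory initData) external returns (address) {
        return address(new ERC1967Proxy(logic, initData));
    }
}
----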

View File

@ -7,19 +7,21 @@ This is a low-level set of contracts implementing different proxy patterns with
The abstract {Proxy} contract implements the core delegation functionality. If the concrete proxies that we provide below are not suitable, we encourage building on top of this base contract since it contains an assembly block that may be hard to get right.
Upgradeability is implemented in the {UpgradeableProxy} contract, although it provides only an internal upgrade interface. For an upgrade interface exposed externally to an admin, we provide {TransparentUpgradeableProxy}. Both of these contracts use the storage slots specified in https://eips.ethereum.org/EIPS/eip-1967[EIP1967] to avoid clashes with the storage of the implementation contract behind the proxy.
{ERC1967Proxy} provides a simple, fully functioning proxy. While this proxy is not by itself upgradeable, it includes an internal upgrade interface. For an upgrade interface exposed externally to an admin, we provide {TransparentUpgradeableProxy}. Both of these contracts use the storage slots specified in https://eips.ethereum.org/EIPS/eip-1967[EIP1967] to avoid clashes with the storage of the implementation contract behind the proxy.
An alternative upgradeability mechanism is provided in <<Beacon>>. This pattern, popularized by Dharma, allows multiple proxies to be upgraded to a different implementation in a single transaction. In this pattern, the proxy contract doesn't hold the implementation address in storage like {UpgradeableProxy}, but the address of a {UpgradeableBeacon} contract, which is where the implementation address is actually stored and retrieved from. The `upgrade` operations that change the implementation contract address are then sent to the beacon instead of to the proxy contract, and all proxies that follow that beacon are automatically upgraded.
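A hedged sketch of the beacon pattern just described, assuming the `UpgradeableBeacon(implementation)` and `BeaconProxy(beacon, data)` constructors and the 4.0 import paths (none of which appear in this compare):

[source,solidity]
----
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed import paths and constructors; access control omitted for brevity.
import "@openzeppelin/contracts/proxy/beacon/UpgradeableBeacon.sol";
import "@openzeppelin/contracts/proxy/beacon/BeaconProxy.sol";

contract BeaconFactory {
    UpgradeableBeacon public immutable beacon;

    constructor(address initialImplementation) {
        // The beacon holds the implementation address for every proxy below.
        beacon = new UpgradeableBeacon(initialImplementation);
    }

    // Each proxy reads its implementation from the beacon instead of storing it.
    function deployInstance(bytes memory initData) external returns (address) {
        return address(new BeaconProxy(address(beacon), initData));
    }

    // Upgrading the beacon upgrades every proxy that follows it.
    function upgradeAll(address newImplementation) external {
        beacon.upgradeTo(newImplementation);
    }
}
----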
The {Clones} library provides a way to deploy minimal non-upgradeable proxies for cheap. This can be useful for applications that require deploying many instances of the same contract (for example one per user, or one per task). These instances are designed to be both cheap to deploy, and cheap to call. The drawback being that they are not upgradeable.
CAUTION: Using upgradeable proxies correctly and securely is a difficult task that requires deep knowledge of the proxy pattern, Solidity, and the EVM. Unless you want a lot of low level control, we recommend using the xref:upgrades-plugins::index.adoc[OpenZeppelin Upgrades Plugins] for Truffle and Buidler.
CAUTION: Using upgradeable proxies correctly and securely is a difficult task that requires deep knowledge of the proxy pattern, Solidity, and the EVM. Unless you want a lot of low level control, we recommend using the xref:upgrades-plugins::index.adoc[OpenZeppelin Upgrades Plugins] for Truffle and Hardhat.
== Core
{{Proxy}}
{{UpgradeableProxy}}
== ERC1967
{{ERC1967Proxy}}
== Transparent Proxy

View File

@ -2,7 +2,7 @@
pragma solidity ^0.8.0;
import "../UpgradeableProxy.sol";
import "../ERC1967/ERC1967Proxy.sol";
/**
* @dev This contract implements a proxy that is upgradeable by an admin.
@ -25,12 +25,12 @@ import "../UpgradeableProxy.sol";
* Our recommendation is for the dedicated account to be an instance of the {ProxyAdmin} contract. If set up this way,
* you should think of the `ProxyAdmin` instance as the real administrative interface of your proxy.
*/
contract TransparentUpgradeableProxy is UpgradeableProxy {
contract TransparentUpgradeableProxy is ERC1967Proxy {
/**
* @dev Initializes an upgradeable proxy managed by `_admin`, backed by the implementation at `_logic`, and
* optionally initialized with `_data` as explained in {UpgradeableProxy-constructor}.
*/
constructor(address _logic, address admin_, bytes memory _data) payable UpgradeableProxy(_logic, _data) {
constructor(address _logic, address admin_, bytes memory _data) payable ERC1967Proxy(_logic, _data) {
assert(_ADMIN_SLOT == bytes32(uint256(keccak256("eip1967.proxy.admin")) - 1));
_setAdmin(admin_);
}
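Sticking to the constructor shown above, a deployment sketch; the import path is assumed, and per the docstring the `admin` would ideally be a `ProxyAdmin` instance rather than an externally owned account.

[source,solidity]
----
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed import path for the 4.0 folder layout.
import "@openzeppelin/contracts/proxy/transparent/TransparentUpgradeableProxy.sol";

contract TransparentDeployer {
    // `admin` is the only account allowed to call the upgrade interface;
    // its calls are never forwarded to the implementation.
    function deploy(address logic, address admin, bytes memory initData) external returns (address) {
        return address(new TransparentUpgradeableProxy(logic, admin, initData));
    }
}
----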

View File

@ -27,17 +27,10 @@ abstract contract ERC20Capped is ERC20 {
}
/**
* @dev See {ERC20-_beforeTokenTransfer}.
*
* Requirements:
*
* - minted tokens must not cause the total supply to go over the cap.
* @dev See {ERC20-_mint}.
*/
function _beforeTokenTransfer(address from, address to, uint256 amount) internal virtual override {
super._beforeTokenTransfer(from, to, amount);
if (from == address(0)) { // When minting tokens
require(totalSupply() + amount <= cap(), "ERC20Capped: cap exceeded");
}
function _mint(address account, uint256 amount) internal virtual override {
require(ERC20.totalSupply() + amount <= cap(), "ERC20Capped: cap exceeded");
super._mint(account, amount);
}
}
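Since the cap is now enforced in `_mint` itself, any derived token gets the check on every mint path without overriding hooks. A minimal sketch, with the `ERC20Capped(cap_)` constructor and import path assumed from the 4.0 layout:

[source,solidity]
----
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed import path for the 4.0 folder layout.
import "@openzeppelin/contracts/token/ERC20/extensions/ERC20Capped.sol";

contract CappedToken is ERC20Capped {
    constructor(uint256 cap_) ERC20("Capped Token", "CAP") ERC20Capped(cap_) {}

    // Reverts with "ERC20Capped: cap exceeded" once totalSupply() + amount > cap().
    // Access control omitted for brevity.
    function mint(address to, uint256 amount) external {
        _mint(to, amount);
    }
}
----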

View File

@ -39,6 +39,8 @@ NOTE: This core set of contracts is designed to be unopinionated, allowing devel
{{ERC721Burnable}}
{{ERC721URIStorage}}
== Presets
These contracts are preconfigured combinations of the above features. They can be used through inheritance or as models to copy and paste their source code.

View File

@ -0,0 +1,66 @@
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;
import "../ERC721.sol";
/**
* @dev ERC721 token with storage based token uri management.
*/
abstract contract ERC721URIStorage is ERC721 {
using Strings for uint256;
// Optional mapping for token URIs
mapping (uint256 => string) private _tokenURIs;
/**
* @dev See {IERC721Metadata-tokenURI}.
*/
function tokenURI(uint256 tokenId) public view virtual override returns (string memory) {
require(_exists(tokenId), "ERC721URIStorage: URI query for nonexistent token");
string memory _tokenURI = _tokenURIs[tokenId];
string memory base = _baseURI();
// If there is no base URI, return the token URI.
if (bytes(base).length == 0) {
return _tokenURI;
}
// If both are set, concatenate the baseURI and tokenURI (via abi.encodePacked).
if (bytes(_tokenURI).length > 0) {
return string(abi.encodePacked(base, _tokenURI));
}
return super.tokenURI(tokenId);
}
/**
* @dev Sets `_tokenURI` as the tokenURI of `tokenId`.
*
* Requirements:
*
* - `tokenId` must exist.
*/
function _setTokenURI(uint256 tokenId, string memory _tokenURI) internal virtual {
require(_exists(tokenId), "ERC721URIStorage: URI set of nonexistent token");
_tokenURIs[tokenId] = _tokenURI;
}
/**
* @dev Destroys `tokenId`.
* The approval is cleared when the token is burned.
*
* Requirements:
*
* - `tokenId` must exist.
*
* Emits a {Transfer} event.
*/
function _burn(uint256 tokenId) internal virtual override {
super._burn(tokenId);
if (bytes(_tokenURIs[tokenId]).length != 0) {
delete _tokenURIs[tokenId];
}
}
}
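Putting the new extension to work, a sketch in the spirit of the mock and test elsewhere in this compare: the base URI is a constant prefix and each token stores its own suffix, which `tokenURI` concatenates as described above. The contract name and URLs are invented for illustration.

[source,solidity]
----
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed package path mirroring the mock's relative import above.
import "@openzeppelin/contracts/token/ERC721/extensions/ERC721URIStorage.sol";

contract Artwork is ERC721URIStorage {
    uint256 private _nextId;

    constructor() ERC721("Artwork", "ART") {}

    function _baseURI() internal pure override returns (string memory) {
        return "https://api.example.com/v1/";
    }

    // Mints a token and stores its URI suffix; tokenURI(id) then returns
    // "https://api.example.com/v1/" + uri per the concatenation rule above.
    function mintWithURI(address to, string memory uri) external returns (uint256 tokenId) {
        tokenId = _nextId++;
        _mint(to, tokenId);
        _setTokenURI(tokenId, uri);
    }
}
----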

View File

@ -65,8 +65,8 @@ contract ERC777 is Context, IERC777, IERC20 {
_symbol = symbol_;
_defaultOperatorsArray = defaultOperators_;
for (uint256 i = 0; i < _defaultOperatorsArray.length; i++) {
_defaultOperators[_defaultOperatorsArray[i]] = true;
for (uint256 i = 0; i < defaultOperators_.length; i++) {
_defaultOperators[defaultOperators_[i]] = true;
}
// register interfaces

View File

@ -1,6 +1,6 @@
name: contracts
title: Contracts
version: 3.x
version: 4.x
nav:
- modules/ROOT/nav.adoc
- modules/api/nav.adoc

View File

@ -190,15 +190,15 @@ for (let i = 0; i < minterCount; ++i) {
Access control is essential to prevent unauthorized access to critical functions. These functions may be used to mint tokens, freeze transfers or perform an upgrade that completely changes the smart contract logic. While xref:api:access.adoc#Ownable[`Ownable`] and xref:api:access.adoc#AccessControl[`AccessControl`] can prevent unauthorized access, they do not address the issue of a misbehaving administrator attacking their own system to the prejudice of their users.
This is the issue the xref:api:access.adoc#TimelockController[`TimelockControler`] is addressing.
This is the issue the xref:api:governance.adoc#TimelockController[`TimelockController`] is addressing.
The xref:api:access.adoc#TimelockController[`TimelockControler`] is a proxy that is governed by proposers and executors. When set as the owner/admin/controller of a smart contract, it ensures that whichever maintenance operation is ordered by the proposers is subject to a delay. This delay protects the users of the smart contract by giving them time to review the maintenance operation and exit the system if they consider it is in their best interest to do so.
The xref:api:governance.adoc#TimelockController[`TimelockController`] is a proxy that is governed by proposers and executors. When set as the owner/admin/controller of a smart contract, it ensures that whichever maintenance operation is ordered by the proposers is subject to a delay. This delay protects the users of the smart contract by giving them time to review the maintenance operation and exit the system if they consider it is in their best interest to do so.
=== Using `TimelockControler`
=== Using `TimelockController`
By default, the address that deployed the xref:api:access.adoc#TimelockController[`TimelockControler`] gets administration privileges over the timelock. This role grants the right to assign proposers, executors, and other administrators.
By default, the address that deployed the xref:api:governance.adoc#TimelockController[`TimelockController`] gets administration privileges over the timelock. This role grants the right to assign proposers, executors, and other administrators.
The first step in configuring the xref:api:access.adoc#TimelockController[`TimelockControler`] is to assign at least one proposer and one executor. These can be assigned during construction or later by anyone with the administrator role. These roles are not exclusive, meaning an account can have both roles.
The first step in configuring the xref:api:governance.adoc#TimelockController[`TimelockController`] is to assign at least one proposer and one executor. These can be assigned during construction or later by anyone with the administrator role. These roles are not exclusive, meaning an account can have both roles.
Roles are managed using the xref:api:access.adoc#AccessControl[`AccessControl`] interface and the `bytes32` values for each role are accessible through the `ADMIN_ROLE`, `PROPOSER_ROLE` and `EXECUTOR_ROLE` constants.
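A hedged sketch of that setup; the three-argument constructor `(minDelay, proposers, executors)` is an assumption not shown in this compare, while `grantRole` and the role constants come from the `AccessControl` interface mentioned above.

[source,solidity]
----
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed import path for the 4.0 folder layout.
import "@openzeppelin/contracts/governance/TimelockController.sol";

contract TimelockSetup {
    function deployTimelock(
        uint256 minDelay,
        address[] memory proposers,
        address[] memory executors,
        address lateExecutor
    ) external returns (TimelockController timelock) {
        // Assumed constructor shape: (minDelay, proposers, executors).
        timelock = new TimelockController(minDelay, proposers, executors);

        // The deployer holds the admin role, so it can assign further roles
        // later through the AccessControl interface and the role constants.
        timelock.grantRole(timelock.EXECUTOR_ROLE(), lateExecutor);
    }
}
----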
@ -216,6 +216,6 @@ TIP: A recommended configuration is to grant both roles to a secure governance c
=== Minimum delay
Operations executed by the xref:api:access.adoc#TimelockController[`TimelockControler`] are not subject to a fixed delay but rather a minimum delay. Some major updates might call for a longer delay. For example, if a delay of just a few days might be sufficient for users to audit a minting operation, it makes sense to use a delay of a few weeks, or even a few months, when scheduling a smart contract upgrade.
Operations executed by the xref:api:governance.adoc#TimelockController[`TimelockController`] are not subject to a fixed delay but rather a minimum delay. Some major updates might call for a longer delay. For example, if a delay of just a few days might be sufficient for users to audit a minting operation, it makes sense to use a delay of a few weeks, or even a few months, when scheduling a smart contract upgrade.
The minimum delay (accessible through the xref:api:access.adoc#TimelockController-getMinDelay--[`getMinDelay`] method) can be updated by calling the xref:api:access.adoc#TimelockController-updateDelay-uint256-[`updateDelay`] function. Bear in mind that access to this function is only accessible by the timelock itself, meaning this maintenance operation has to go through the timelock itself.
The minimum delay (accessible through the xref:api:governance.adoc#TimelockController-getMinDelay--[`getMinDelay`] method) can be updated by calling the xref:api:governance.adoc#TimelockController-updateDelay-uint256-[`updateDelay`] function. Bear in mind that this function can only be called by the timelock itself, meaning this maintenance operation has to go through the timelock's own scheduling process.
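In practice that round trip looks like the sketch below, using only the functions referenced above (proposer and executor roles required; the import path is assumed):

[source,solidity]
----
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Assumed import path for the 4.0 folder layout.
import "@openzeppelin/contracts/governance/TimelockController.sol";

contract DelayGovernor {
    TimelockController public immutable timelock;

    constructor(TimelockController timelock_) {
        timelock = timelock_;
    }

    // Proposer role required: schedules a call from the timelock to itself.
    function proposeNewDelay(uint256 newMinDelay, bytes32 salt) external {
        bytes memory data = abi.encodeWithSignature("updateDelay(uint256)", newMinDelay);
        timelock.schedule(address(timelock), 0, data, bytes32(0), salt, timelock.getMinDelay());
    }

    // Executor role required: runs the call once the minimum delay has elapsed.
    function applyNewDelay(uint256 newMinDelay, bytes32 salt) external {
        bytes memory data = abi.encodeWithSignature("updateDelay(uint256)", newMinDelay);
        timelock.execute(address(timelock), 0, data, bytes32(0), salt);
    }
}
----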

View File

@ -11,6 +11,7 @@ for (const f of fs.readdirSync(path.join(__dirname, 'hardhat'))) {
}
const enableGasReport = !!process.env.ENABLE_GAS_REPORT;
const enableProduction = process.env.COMPILE_MODE === 'production';
/**
* @type import('hardhat/config').HardhatUserConfig
@ -20,7 +21,7 @@ module.exports = {
version: '0.8.0',
settings: {
optimizer: {
enabled: enableGasReport,
enabled: enableGasReport || enableProduction,
runs: 200,
},
},

package-lock.json generated (15862 lines changed)

File diff suppressed because it is too large.

View File

@ -1,12 +1,11 @@
{
"name": "openzeppelin-solidity",
"description": "Secure Smart Contract library for Solidity",
"version": "4.0.0-beta.0",
"version": "4.0.0-beta.1",
"files": [
"/contracts/**/*.sol",
"/build/contracts/*.json",
"!/contracts/mocks",
"/test/behaviors"
"!/contracts/mocks/**/*"
],
"bin": {
"openzeppelin-contracts-migrate-imports": "scripts/migrate-imports.js"
@ -23,7 +22,7 @@
"lint:js:fix": "eslint --ignore-path .gitignore . --fix",
"lint:sol": "echo 'solidity linter currently disabled' # solhint --max-warnings 0 \"contracts/**/*.sol\"",
"prepublish": "rimraf build contracts/build artifacts cache",
"prepare": "npm run compile",
"prepare": "env COMPILE_MODE=production npm run compile",
"prepack": "scripts/prepack.sh",
"release": "scripts/release/release.sh",
"version": "scripts/release/version.sh",

View File

@ -12,12 +12,20 @@ const files = proc.execFileSync(
console.log('.API');
function getPageTitle (directory) {
if (directory === 'metatx') {
return 'Meta Transactions';
} else {
return startCase(directory);
}
}
const links = files.map((file) => {
const doc = file.replace(baseDir, '');
const title = path.parse(file).name;
return {
xref: `* xref:${doc}[${startCase(title)}]`,
xref: `* xref:${doc}[${getPageTitle(title)}]`,
title,
};
});

View File

@ -163,6 +163,7 @@ function getUpgradeablePath (file) {
module.exports = {
pathUpdates,
updateImportPaths,
getUpgradeablePath,
};
if (require.main === module) {

View File

@ -2,12 +2,16 @@ const path = require('path');
const { promises: fs, constants: { F_OK } } = require('fs');
const { expect } = require('chai');
const { pathUpdates, updateImportPaths } = require('../scripts/migrate-imports.js');
const { pathUpdates, updateImportPaths, getUpgradeablePath } = require('../scripts/migrate-imports.js');
describe('migrate-imports.js', function () {
it('every new path exists', async function () {
for (const p of Object.values(pathUpdates)) {
await fs.access(path.join('contracts', p), F_OK);
try {
await fs.access(path.join('contracts', p), F_OK);
} catch (e) {
await fs.access(path.join('contracts', getUpgradeablePath(p)), F_OK);
}
}
});

View File

@ -0,0 +1,13 @@
const shouldBehaveLikeProxy = require('../Proxy.behaviour');
const ERC1967Proxy = artifacts.require('ERC1967Proxy');
contract('ERC1967Proxy', function (accounts) {
const [proxyAdminOwner] = accounts;
const createProxy = async function (implementation, _admin, initData, opts) {
return ERC1967Proxy.new(implementation, initData, opts);
};
shouldBehaveLikeProxy(createProxy, undefined, proxyAdminOwner);
});

View File

@ -11,7 +11,7 @@ function toChecksumAddress (address) {
return ethereumjsUtil.toChecksumAddress('0x' + address.replace(/^0x/, '').padStart(40, '0'));
}
module.exports = function shouldBehaveLikeUpgradeableProxy (createProxy, proxyAdminAddress, proxyCreator) {
module.exports = function shouldBehaveLikeProxy (createProxy, proxyAdminAddress, proxyCreator) {
it('cannot be initialized with a non-contract address', async function () {
const nonContractAddress = proxyCreator;
const initializeData = Buffer.from('');

View File

@ -1,13 +0,0 @@
const shouldBehaveLikeUpgradeableProxy = require('./UpgradeableProxy.behaviour');
const UpgradeableProxy = artifacts.require('UpgradeableProxy');
contract('UpgradeableProxy', function (accounts) {
const [proxyAdminOwner] = accounts;
const createProxy = async function (implementation, _admin, initData, opts) {
return UpgradeableProxy.new(implementation, initData, opts);
};
shouldBehaveLikeUpgradeableProxy(createProxy, undefined, proxyAdminOwner);
});

View File

@ -80,7 +80,7 @@ module.exports = function shouldBehaveLikeTransparentUpgradeableProxy (createPro
it('reverts', async function () {
await expectRevert(
this.proxy.upgradeTo(ZERO_ADDRESS, { from }),
'UpgradeableProxy: new implementation is not a contract',
'ERC1967Proxy: new implementation is not a contract',
);
});
});

View File

@ -1,4 +1,4 @@
const shouldBehaveLikeUpgradeableProxy = require('../UpgradeableProxy.behaviour');
const shouldBehaveLikeProxy = require('../Proxy.behaviour');
const shouldBehaveLikeTransparentUpgradeableProxy = require('./TransparentUpgradeableProxy.behaviour');
const TransparentUpgradeableProxy = artifacts.require('TransparentUpgradeableProxy');
@ -10,6 +10,6 @@ contract('TransparentUpgradeableProxy', function (accounts) {
return TransparentUpgradeableProxy.new(logic, admin, initData, opts);
};
shouldBehaveLikeUpgradeableProxy(createProxy, proxyAdminAddress, proxyAdminOwner);
shouldBehaveLikeProxy(createProxy, proxyAdminAddress, proxyAdminOwner);
shouldBehaveLikeTransparentUpgradeableProxy(createProxy, accounts);
});

View File

@ -0,0 +1,87 @@
const { BN, expectRevert } = require('@openzeppelin/test-helpers');
const { expect } = require('chai');
const ERC721URIStorageMock = artifacts.require('ERC721URIStorageMock');
contract('ERC721URIStorage', function (accounts) {
const [ owner ] = accounts;
const name = 'Non Fungible Token';
const symbol = 'NFT';
const firstTokenId = new BN('5042');
const nonExistentTokenId = new BN('13');
beforeEach(async function () {
this.token = await ERC721URIStorageMock.new(name, symbol);
});
describe('token URI', function () {
beforeEach(async function () {
await this.token.mint(owner, firstTokenId);
});
const baseURI = 'https://api.com/v1/';
const sampleUri = 'mock://mytoken';
it('it is empty by default', async function () {
expect(await this.token.tokenURI(firstTokenId)).to.be.equal('');
});
it('reverts when queried for non existent token id', async function () {
await expectRevert(
this.token.tokenURI(nonExistentTokenId), 'ERC721URIStorage: URI query for nonexistent token',
);
});
it('can be set for a token id', async function () {
await this.token.setTokenURI(firstTokenId, sampleUri);
expect(await this.token.tokenURI(firstTokenId)).to.be.equal(sampleUri);
});
it('reverts when setting for non existent token id', async function () {
await expectRevert(
this.token.setTokenURI(nonExistentTokenId, sampleUri), 'ERC721URIStorage: URI set of nonexistent token',
);
});
it('base URI can be set', async function () {
await this.token.setBaseURI(baseURI);
expect(await this.token.baseURI()).to.equal(baseURI);
});
it('base URI is added as a prefix to the token URI', async function () {
await this.token.setBaseURI(baseURI);
await this.token.setTokenURI(firstTokenId, sampleUri);
expect(await this.token.tokenURI(firstTokenId)).to.be.equal(baseURI + sampleUri);
});
it('token URI can be changed by changing the base URI', async function () {
await this.token.setBaseURI(baseURI);
await this.token.setTokenURI(firstTokenId, sampleUri);
const newBaseURI = 'https://api.com/v2/';
await this.token.setBaseURI(newBaseURI);
expect(await this.token.tokenURI(firstTokenId)).to.be.equal(newBaseURI + sampleUri);
});
it('tokenId is appended to base URI for tokens with no URI', async function () {
await this.token.setBaseURI(baseURI);
expect(await this.token.tokenURI(firstTokenId)).to.be.equal(baseURI + firstTokenId);
});
it('tokens with URI can be burnt ', async function () {
await this.token.setTokenURI(firstTokenId, sampleUri);
await this.token.burn(firstTokenId, { from: owner });
expect(await this.token.exists(firstTokenId)).to.equal(false);
await expectRevert(
this.token.tokenURI(firstTokenId), 'ERC721URIStorage: URI query for nonexistent token',
);
});
});
});