Installing .NET Framework offline on Windows 10 with DISM
DISM (Deployment Image Servicing and Management, DISM.exe) is used to install, uninstall, configure, and update features and packages in offline Windows images and offline Windows Preinstallation Environment (Windows PE) images.
To install .NET Framework, with the Windows installation media mounted as drive E:
Dism /Online /Enable-Feature /FeatureName:netfx3 /Source:E:\sources\sxs
Other uses
1. Scan the image to check for corruption (corruption causes many small problems, for example Windows Update may fail)
Dism /Online /Cleanup-Image /ScanHealth
2. Finally, repair the system image
Dism /Online /Cleanup-Image /RestoreHealth
Repair the image from a local source, which can be a Windows installation disc or an ISO mounted in a virtual drive:
Dism /Online /Cleanup-Image /RestoreHealth /Source:c:\test\mount\windows /LimitAccess
In a Windows PE environment you can also use the ImageFile option to apply a Windows image (.wim) file directly:
Dism /Apply-Image /ImageFile:X:\sources\install.wim /Index:1 /ApplyDir:C:\
References
https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/what-is-dism
After installing Anaconda or Miniconda in a container, you have to run conda init manually before the corresponding environments can be activated.
Assume the conda installation prefix is /opt/conda.
Looking at ~/.bashrc after conda init, conda performs the setup according to the shell type:
__conda_setup="$('/opt/conda/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/opt/conda/etc/profile.d/conda.sh" ]; then
        . "/opt/conda/etc/profile.d/conda.sh"
    else
        export PATH="/opt/conda/bin:$PATH"
    fi
fi
unset __conda_setup
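The same three-level fallback can be sketched with the prefix factored into a variable (using the /opt/conda prefix from the example above): try the conda shell hook, fall back to sourcing conda.sh, and finally just prepend the bin directory to PATH. On a machine without conda at that prefix, only the last branch runs:

```shell
# Fallback chain from the conda init block, prefix in a variable
prefix=/opt/conda
__conda_setup="$("$prefix/bin/conda" 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"                   # full shell integration
elif [ -f "$prefix/etc/profile.d/conda.sh" ]; then
    . "$prefix/etc/profile.d/conda.sh"      # activation functions only
else
    export PATH="$prefix/bin:$PATH"         # bare PATH fallback
fi
unset __conda_setup
```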
Once conda is installed, performing the equivalent steps directly makes /bin/bash activate the base environment by default at startup:
ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc
echo "conda activate base" >> ~/.bashrc
export PATH="/opt/conda/bin:$PATH"
To auto-activate a different environment, set up that virtual environment first, then change
echo "conda activate base" >> ~/.bashrc
to the environment you need.
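For example, switching the auto-activated environment from base to a hypothetical environment named myenv is a one-line sed edit of ~/.bashrc; sketched here against a temporary file so nothing real is modified:

```shell
# Replace the activate line (myenv is a made-up environment name)
rc=$(mktemp)
echo "conda activate base" > "$rc"
sed -i 's/conda activate base/conda activate myenv/' "$rc"
cat "$rc"    # prints: conda activate myenv
```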
NVIDIA CUDA development environment: enabling GPUs in Docker containers
1. Prepare Docker > 19.03 and configure nvidia-container-toolkit.
2. Check the GPU driver version installed on the host and choose a matching container version.
3. Pull a base image, from NVIDIA's official registry or Docker Hub:
https://ngc.nvidia.com/catalog/containers/nvidia:cuda/tags
https://gitlab.com/nvidia/container-images/cuda
Dockerfile for cuda10-py36-conda:
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04
MAINTAINER Limc <limc@limc.com.cn>
#close frontend
ENV DEBIAN_FRONTEND noninteractive
# add cuda user
# --disabled-password = Don't assign a password
# using root group for OpenShift compatibility
ENV CUDA_USER_NAME=cuda10
ENV CUDA_USER_GROUP=root
# add user
RUN adduser --system --group --disabled-password --no-create-home --disabled-login $CUDA_USER_NAME
RUN adduser $CUDA_USER_NAME $CUDA_USER_GROUP
# Install basic dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cmake \
git \
wget \
libopencv-dev \
libsnappy-dev \
python-dev \
python-pip \
#tzdata \
vim
# Install conda for python
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-py37_4.8.2-Linux-x86_64.sh -O ~/miniconda.sh && \
/bin/bash ~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh
# Set locale
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
ENV PATH /opt/conda/bin:$PATH
RUN ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh && \
echo ". /opt/conda/etc/profile.d/conda.sh" >> ~/.bashrc && \
echo "conda activate base" >> ~/.bashrc && \
find /opt/conda/ -follow -type f -name '*.a' -delete && \
find /opt/conda/ -follow -type f -name '*.js.map' -delete && \
/opt/conda/bin/conda clean -afy
# copy entrypoint.sh
#COPY ./entrypoint.sh /entrypoint.sh
# install
#ENTRYPOINT ["/entrypoint.sh"]
# Initialize workspace
COPY ./app /app
# make workdir
WORKDIR /app
# update pip if necessary
#RUN pip install --upgrade --no-cache-dir pip
# install gunicorn
# RUN pip install --no-cache-dir -r ./requirements.txt
# install use conda
#RUN conda install --yes --file ./requirements.txt
RUN while read requirement; do conda install --yes $requirement; done < requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh /entrypoint.sh
# install
ENTRYPOINT ["/entrypoint.sh"]
# switch to non-root user
USER $CUDA_USER_NAME
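The while-read loop near the end of the Dockerfile installs packages one line at a time, presumably so that a single unresolvable package does not abort the entire install the way conda install --file would. The pattern in isolation, with echo standing in for conda install and a made-up requirements list:

```shell
# Per-line processing of a requirements file
# (echo stands in for conda install; package names are examples)
printf 'numpy\nscipy\n' > requirements.txt
while read -r requirement; do
    echo "conda install --yes $requirement"
done < requirements.txt
```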
Makefile for running the container:
IMG:=`cat Name`
GPU_OPT:=all
MOUNT_ETC:=
MOUNT_LOG:=
MOUNT_APP:=-v `pwd`/work/app:/app
MOUNT:=$(MOUNT_ETC) $(MOUNT_LOG) $(MOUNT_APP)
EXT_VOL:=
PORT_MAP:=
LINK_MAP:=
RESTART:=no
CONTAINER_NAME:=docker-cuda10-py36-hello
echo:
	echo $(IMG)
run:
	docker rm $(CONTAINER_NAME) || echo
	docker run -d --gpus $(GPU_OPT) --name $(CONTAINER_NAME) $(LINK_MAP) $(PORT_MAP) --restart=$(RESTART) \
	$(EXT_VOL) $(MOUNT) $(IMG)
run_i:
	docker rm $(CONTAINER_NAME) || echo
	docker run -i -t --gpus $(GPU_OPT) --name $(CONTAINER_NAME) $(LINK_MAP) $(PORT_MAP) \
	$(EXT_VOL) $(MOUNT) $(IMG) /bin/bash
exec_i:
	docker exec -i -t $(CONTAINER_NAME) /bin/bash
stop:
	docker stop $(CONTAINER_NAME)
rm: stop
	docker rm $(CONTAINER_NAME)
entrypoint.sh:
#!/bin/bash
set -e
# Add python as command if needed
if [ "${1:0:1}" = '-' ]; then
    set -- python "$@"
fi
# Drop root privileges if we are running python as root
# (allows the container to be started with `--user`)
if [ "$1" = 'python' -a "$(id -u)" = '0' ]; then
    # Change the ownership of user-mutable directories to cuda10
    for path in \
        /app \
        /usr/local/cuda/ \
    ; do
        chown -R cuda10:root "$path"
    done
    # su-exec must be available in the image; run the command as cuda10
    set -- su-exec cuda10 "$@"
    #exec su-exec elasticsearch "$BASH_SOURCE" "$@"
fi
# Otherwise assume the user wants to run their own process,
# for example a `bash` shell to explore this image
exec "$@"
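The set -- lines in entrypoint.sh rewrite the script's positional parameters, so the final exec "$@" runs the amended command line. The mechanism in isolation (the arguments are made up, and a portable case test stands in for the bash-only substring check):

```shell
# Simulate being started as: entrypoint.sh -u script.py
set -- "-u" "script.py"
# First argument starts with '-', so prepend python, as the entrypoint does
case "$1" in
    -*) set -- python "$@" ;;
esac
echo "$@"    # prints: python -u script.py
```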
Notes
1. The GPU requires root privileges inside the container; otherwise you will get errors like:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345
For security, you can create a new user inside the container and add it to the root group.
2. The host GPU driver and CUDA must match the official container's version. cuDNN does not need to match; multiple cuDNN versions can be used, as long as they stay within the range the GPU supports.
3. A container that exits abnormally can keep the GPU occupied; if it hangs, the GPU becomes unusable outside the container, restarting the docker daemon does not help, and only a reboot of the machine fixes it.
Complete source code
https://github.com/limccn/ultrasound-nerve-segmentation-in-tensorflow/commit/d7de1cbeb641d2fae4f5a78ff590a0254667b398
References
https://gitlab.com/nvidia/container-images/cuda
Upgrading to Docker 19.03 and using nvidia-container-toolkit
After upgrading Docker to 19.03, NVIDIA provides native GPU support: installing the nvidia-container-toolkit package is all that is needed, without the complex configuration that nvidia-docker/nvidia-docker2 required. Note that docker-compose is not supported in this mode.
Installation steps
1. Confirm the host NVIDIA driver is installed correctly and CUDA/cuDNN are configured properly (the official docs say CUDA on the host is not strictly required).
2. Install Docker, version 19.03 or later; see:
https://docs.docker.com/engine/install/ubuntu/
3. Add the nvidia-docker package source:
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
4. Install nvidia-container-toolkit with the following commands and restart Docker:
sudo apt-get install -y nvidia-container-toolkit
#restart docker
sudo systemctl restart docker
5. If nvidia-docker2 is already installed, nvidia-container-toolkit can still be installed separately and the two do not interfere with each other. nvidia-docker2 has officially been declared deprecated, but it continues to work.
Main differences in usage
With nvidia-container-toolkit:
# with nvidia-container-toolkit
docker run --gpus "device=1,2"
With nvidia-docker2:
# with nvidia-docker2 (deprecated, but still usable)
docker run --runtime=nvidia
With nvidia-docker:
# with nvidia-docker
nvidia-docker run
Gotchas
1. nvidia-container-toolkit and nvidia-docker2 keep their container images in different, non-interchangeable locations; to mix them, pick the container version that matches each runtime.
2. Multi-GPU support with nvidia-container-toolkit did not work in my tests; running a container on a single GPU is the safer option. This may be related to the host configuration.
References
https://docs.nvidia.com/deeplearning/frameworks/user-guide/index.html
https://docs.nvidia.com/ngc/ngc-aws-setup-guide/running-containers.html#preparing-to-run-containers
https://github.com/NVIDIA/nvidia-docker
https://nvidia.github.io/nvidia-docker/
Ports that only need to reach the external network through the uplink, and do not need to exchange traffic with other ports on the switch, can be physically isolated at Layer 2 with a port isolation group.
VLAN-1020 10.20.0.0/16 eg1/0/17-eg1/0/18 access
Note:
1. A port can only belong to one port isolation group.
2. Traffic passing through trunk/uplink ports is not isolated.
1. Create a port isolation group
# enter system view
sys
# create port isolation group 1
[H3C] port-isolate group 1
# select interface GigabitEthernet1/0/17
[H3C] interface GigabitEthernet 1/0/17
# join isolation group 1
[H3C-GigabitEthernet1/0/17]port-isolate enable group 1
# bring the port up
[H3C-GigabitEthernet1/0/17]undo shutdown
# done
[H3C-GigabitEthernet1/0/17]quit
# select interface GigabitEthernet1/0/18
[H3C] interface GigabitEthernet 1/0/18
# join isolation group 1
[H3C-GigabitEthernet1/0/18]port-isolate enable group 1
# bring the port up
[H3C-GigabitEthernet1/0/18]undo shutdown
# done
[H3C-GigabitEthernet1/0/18]quit
2. Management
# display the port isolation group
[H3C] display port-isolate group 1
[CENTOS] Mounting Tencent Cloud COS on a VPS with cosfs
Preparation
1. Prepare the Bucket to be mounted and configure its permissions.
2. Obtain an AccessKey and Secret that can mount the Bucket.
Tencent's official cosfs repository:
https://github.com/tencentyun/cosfs/
1. Download the cosfs package
wget https://github.com/tencentyun/cosfs/releases/download/v1.0.14/cosfs-1.0.14-centos7.0.x86_64.rpm
2. Install locally
sudo yum localinstall cosfs-1.0.14-centos7.0.x86_64.rpm
3. Configure access
Store the Bucket name and an AccessKeyId/AccessKeySecret with access to that Bucket in /etc/passwd-cosfs. The file's permissions must be set correctly; 640 is recommended.
echo my-bucket:my-access-key-id:my-access-key-secret > /etc/passwd-cosfs
chmod 640 /etc/passwd-cosfs
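The /etc/passwd-cosfs line is simply BucketName:AccessKeyId:AccessKeySecret. A sketch of the two steps against a temporary file with placeholder credentials, so the real /etc/passwd-cosfs is untouched:

```shell
# Compose the credential line with placeholder values in a temp file
f=$(mktemp)
echo "my-bucket:AKIDexample:secretexample" > "$f"
chmod 640 "$f"
stat -c '%a' "$f"    # prints: 640
```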
4. Mount the Bucket to a directory.
# plain mount
cosfs my-bucket my-mount-point -ourl=my-cos-endpoint
# world-accessible (777-style) mount
cosfs my-bucket my-mount-point -ourl=my-cos-endpoint -oallow_other
5. Unmount the mounted bucket
fusermount -u my-mount-point
Gotchas
1. To use it as a remote disk where non-root and other users can read and write, add -oallow_other.
2. cosfs scans the contents of the COS bucket; with many files, avoid operations like ls and find.
3. Avoid auto-mounting via fstab at boot; it may leave your VPS unable to reboot.
References
https://cloud.tencent.com/document/product/436/6883
[CENTOS] Mounting Aliyun OSS on a VPS with ossfs
Preparation
1. Prepare the Bucket to be mounted and configure its permissions.
2. Obtain an AccessKey and Secret that can mount the Bucket.
1. Download the ossfs package
wget http://gosspublic.alicdn.com/ossfs/ossfs_1.80.6_centos7.0_x86_64.rpm
2. Install locally
sudo yum localinstall ossfs_1.80.6_centos7.0_x86_64.rpm
3. Configure access
Store the Bucket name and an AccessKeyId/AccessKeySecret with access to that Bucket in /etc/passwd-ossfs. The file's permissions must be set correctly; 640 is recommended.
echo my-bucket:my-access-key-id:my-access-key-secret > /etc/passwd-ossfs
chmod 640 /etc/passwd-ossfs
4. Mount the Bucket to a directory.
# non-shared mount
ossfs my-bucket my-mount-point -ourl=my-oss-endpoint
# world-accessible (777-style) mount, usable by non-root users
ossfs my-bucket my-mount-point -ourl=my-oss-endpoint -o allow_other
5. Unmount the mounted bucket
fusermount -u my-mount-point
Gotchas
1. To support writes with controlled file permissions, the user must have full control of the Bucket; otherwise the permission settings are lost the next time it is mounted.
2. To use it as a remote disk where non-root and other users can read and write, add -o allow_other.
3. Uploading large files leaves multipart fragments in the OSS Bucket; minimize large transfers, as even the internal network has latency.
4. Streaming or low-level disk I/O, such as the dd command, will simply hang OSS; it is not a real disk after all.
5. ossfs scans the contents of the OSS bucket; with many files, avoid operations like ls and find.
6. Avoid auto-mounting via fstab at boot; it may leave your VPS unable to reboot.
References
https://help.aliyun.com/document_detail/153892.html?spm=a2c4g.11186623.6.750.2b03142bM5YPG3