Background
Tech stack
Spring Boot, Nacos, Seata, Dubbo
How it happened
I wanted to split services out of the original project, so I added some new distributed service modules.
The split later turned out to be too hard (the coupling was too high), so I deleted the modules I had added. After removing the corresponding entries from the parent pom, startup began to fail, although the original services still worked.
All services reported the same error:
2020-12-30 15:29:44.380 ERROR 16580 --- [imeoutChecker_2] i.s.c.r.netty.NettyClientChannelManager : no available service 'default' found, please make sure registry config correct
2020-12-30 15:29:54.205 ERROR 16580 --- [imeoutChecker_1] i.s.c.r.netty.NettyClientChannelManager : no available service 'default' found, please make sure registry config correct
2020-12-30 15:29:54.379 ERROR 16580 --- [imeoutChecker_2] i.s.c.r.netty.NettyClientChannelManager : no available service 'default' found, please make sure registry config correct
Troubleshooting
It had been too long since I set up the Seata and Nacos dependencies and configuration, and I could no longer remember the details. I switched to the pre-release environment to try my luck; still broken.
Reading the source
Seata's initialization entry point is AbstractRpcRemotingClient.init() (source below), which schedules a periodic reconnect to the transaction service group.
The ERROR comes from NettyClientChannelManager.reconnect(), the method that reconnects to the remote servers of the current transaction service group.
The list it gets back is empty, which is why the error is logged; step into getAvailServerList.
availInetSocketAddressList is empty too, so continue into the lookup method.
The lookup method
also returns an empty result.
Next, step into RegistryFactory.getInstance().getServiceGroup(transactionServiceGroup), called from reconnect.
It resolves the server cluster by the config key service.vgroupMapping.my_test_tx_group.
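That key is just a fixed prefix plus the transaction service group name. A minimal sketch of the construction (class and method names here are illustrative, not Seata's actual code):

```java
// Simplified model of how Seata derives the Nacos config key for a
// transaction service group. Only the prefix is taken from Seata's docs;
// the class/method names are made up for illustration.
public class VgroupKey {
    static final String PREFIX = "service.vgroupMapping.";

    // Build the config key that is looked up in the config center.
    static String of(String txServiceGroup) {
        return PREFIX + txServiceGroup;
    }

    public static void main(String[] args) {
        // For the group in this post, the key queried in Nacos is:
        System.out.println(of("my_test_tx_group"));
        // → service.vgroupMapping.my_test_tx_group
    }
}
```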
Step into getConfig,
which eventually lands in NacosConfigService.getConfigInner() (source below), where the value is actually fetched.
That method prefers the local failover config; I have none locally, so it is null. Only then does it fall back to the remote config, so step into the remote call.
And there it is: the remote lookup returns 404. I hurriedly opened the Nacos config table in the database. Blank. Everything related to Seata was gone. Root cause found.
After re-importing the Seata configuration into Nacos, the errors stopped and everything worked again.
Done.
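For reference, a few representative entries of what had to be restored in the config center (the values below are illustrative and must match your own TC setup; recent Seata releases ship a full config.txt plus a nacos-config.sh push script under script/config-center/):

```properties
# Representative Seata entries to restore in Nacos (group SEATA_GROUP by default).
# Cluster name and server address are examples, not universal values.
service.vgroupMapping.my_test_tx_group=default
service.default.grouplist=127.0.0.1:8091
store.mode=db
```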
```java
public abstract class AbstractRpcRemotingClient extends AbstractRpcRemoting
    implements RegisterMsgListener, ClientMessageSender {

    @Override
    public void init() {
        clientBootstrap.setChannelHandlers(new ClientHandler());
        clientBootstrap.start();
        timerExecutor.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                clientChannelManager.reconnect(getTransactionServiceGroup());
            }
        }, SCHEDULE_DELAY_MILLS, SCHEDULE_INTERVAL_MILLS, TimeUnit.MILLISECONDS);
        if (NettyClientConfig.isEnableClientBatchSendRequest()) {
            mergeSendExecutorService = new ThreadPoolExecutor(MAX_MERGE_SEND_THREAD,
                MAX_MERGE_SEND_THREAD,
                KEEP_ALIVE_TIME, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>(),
                new NamedThreadFactory(getThreadPrefix(), MAX_MERGE_SEND_THREAD));
            mergeSendExecutorService.submit(new MergedSendRunnable());
        }
        super.init();
    }
}
```
```java
class NettyClientChannelManager {

    void reconnect(String transactionServiceGroup) {
        List<String> availList = null;
        try {
            availList = getAvailServerList(transactionServiceGroup);
        } catch (Exception e) {
            LOGGER.error("Failed to get available servers: {}", e.getMessage(), e);
            return;
        }
        if (CollectionUtils.isEmpty(availList)) {
            String serviceGroup = RegistryFactory.getInstance()
                .getServiceGroup(transactionServiceGroup);
            LOGGER.error("no available service '{}' found, please make sure registry config correct", serviceGroup);
            return;
        }
        for (String serverAddress : availList) {
            try {
                acquireChannel(serverAddress);
            } catch (Exception e) {
                LOGGER.error("{} can not connect to {} cause:{}", FrameworkErrorCode.NetConnect.getErrCode(), serverAddress, e.getMessage(), e);
            }
        }
    }
}
```
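The guard at the top of reconnect is where the exact error string in the logs comes from: an empty available-server list short-circuits with the error before any connection is attempted. A stripped-down model of that guard (illustrative names, not Seata's code):

```java
import java.util.Collections;
import java.util.List;

// Stripped-down model of the guard in NettyClientChannelManager.reconnect:
// an empty available-server list means we report and bail out early.
public class ReconnectGuard {
    static String reconnect(List<String> availList, String serviceGroup) {
        if (availList == null || availList.isEmpty()) {
            return "no available service '" + serviceGroup + "' found, please make sure registry config correct";
        }
        return "connecting to " + availList.size() + " server(s)";
    }

    public static void main(String[] args) {
        // Reproduces the message seen in the logs when the lookup is empty.
        System.out.println(reconnect(Collections.emptyList(), "default"));
    }
}
```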
```java
public class NacosConfigService implements ConfigService {

    private String getConfigInner(String tenant, String dataId, String group, long timeoutMs) throws NacosException {
        group = null2defaultGroup(group);
        ParamUtils.checkKeyParam(dataId, group);
        ConfigResponse cr = new ConfigResponse();
        cr.setDataId(dataId);
        cr.setTenant(tenant);
        cr.setGroup(group);
        // The local failover config takes priority
        String content = LocalConfigInfoProcessor.getFailover(agent.getName(), dataId, group, tenant);
        if (content != null) {
            LOGGER.warn("[{}] [get-config] get failover ok, dataId={}, group={}, tenant={}, config={}", agent.getName(),
                dataId, group, tenant, ContentUtils.truncateContent(content));
            cr.setContent(content);
            configFilterChainManager.doFilter(null, cr);
            content = cr.getContent();
            return content;
        }
        try {
            content = worker.getServerConfig(dataId, group, tenant, timeoutMs);
            cr.setContent(content);
            configFilterChainManager.doFilter(null, cr);
            content = cr.getContent();
            return content;
        } catch (NacosException ioe) {
            if (NacosException.NO_RIGHT == ioe.getErrCode()) {
                throw ioe;
            }
            LOGGER.warn("[{}] [get-config] get from server error, dataId={}, group={}, tenant={}, msg={}",
                agent.getName(), dataId, group, tenant, ioe.toString());
        }
        LOGGER.warn("[{}] [get-config] get snapshot ok, dataId={}, group={}, tenant={}, config={}", agent.getName(),
            dataId, group, tenant, ContentUtils.truncateContent(content));
        content = LocalConfigInfoProcessor.getSnapshot(agent.getName(), dataId, group, tenant);
        cr.setContent(content);
        configFilterChainManager.doFilter(null, cr);
        content = cr.getContent();
        return content;
    }
}
```
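The resolution order in getConfigInner is the crux of this bug: local failover file first, then the config server, then the local snapshot only when the server call throws. With no failover file and the server-side rows deleted, every tier comes back empty. A minimal model of that order, with suppliers standing in for LocalConfigInfoProcessor and the client worker (illustrative names, not Nacos's API):

```java
import java.util.function.Supplier;

// Minimal model of the three-tier lookup in getConfigInner:
// 1. local failover config, 2. config server, 3. local snapshot on server failure.
public class ConfigResolver {
    static String resolve(Supplier<String> failover, Supplier<String> server, Supplier<String> snapshot) {
        String content = failover.get();   // 1. a local failover value wins outright
        if (content != null) {
            return content;
        }
        try {
            return server.get();           // 2. otherwise ask the config server
        } catch (RuntimeException serverDown) {
            return snapshot.get();         // 3. fall back to the local snapshot only on error
        }
    }

    public static void main(String[] args) {
        // In this post's failure: no failover file, server rows deleted → nothing anywhere.
        System.out.println(resolve(() -> null, () -> null, () -> null));
    }
}
```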
Summary
This kind of failure is a configuration problem between Seata and the registry/config center. Check first whether the configuration in the config center has changed, then treat the actual cause; it saves a lot of time.
If you can, walk through the source once. Knowing why it fails tells you far more precisely where it fails.